When to use VDI, when to use server-based computing, and how the Citrix Ardence dynamic desktop fits into all this

Written on Mar 14 2007


by Brian Madden

VDI, or Virtual Desktop Infrastructure, is quickly entering the buzzword danger zone. At the most basic level, VDI technology is a new method for delivering desktops to users. Of course users have been using desktops for years, at first running locally on their own PCs, and more recently by accessing remote server-based computing (SBC) desktops running on Microsoft terminal servers or Citrix Presentation Servers.

Now that various VDI technologies have hit the market, people’s reactions are all over the place. Some people are talking about how VDI will replace or compete with SBC and traditional desktop technologies. In this article I’ll explain why this isn’t the case, and how all three technologies (VDI, SBC, and traditional desktops) can be used together to provide a holistic desktop delivery solution for a company of any size.

I’ll also explore the technology that makes VDI a reality and discuss some of the roadblocks that may be encountered along the way. I’ll talk about the emergence and importance of a concept known as the “dynamic desktop,” and why this is needed for a “true” VDI solution.

Finally I’ll provide a quick overview of Citrix’s Ardence solution and describe how it can enable organizations to truly realize the “on demand” desktop, whether it’s VDI-based or traditional PC-based.

What is VDI?

The idea is simple. Instead of giving a user a local PC running a local copy of Windows XP, you run the Windows XP (or Vista) desktop software in your datacenter. Then your users remotely connect to and control their own instance of their Windows desktop in a one-to-one manner from their own client device.

In doing so, the user can use any client device they want to access “their desktop.” If you replace a user’s desktop with a thin client that automatically connects to a Windows XP machine in the datacenter when the client is powered on, there’s a good chance that the user wouldn’t even know they were using a remote desktop.

In reality, no one would implement this by stacking Windows XP desktop computers floor-to-ceiling in their datacenter. Instead, VDI is typically implemented by building huge VMware servers running many Windows XP VMs or by using high-density blade servers running Windows XP.

Why use VDI?

So why would anyone do this? To understand it, we first have to look at the alternatives. The role of an IT department is to provide applications for users. In order to use an application, a user needs a desktop. (Be it a Windows desktop, a browser window, or something else, there has to be a backdrop that has some method for users to select and launch applications.)

VDI is about providing desktops to users. Before VDI, there were two other ways to provide desktops to users:

  • The old way, with each user running a local copy of Windows XP on their own local desktop or laptop computer. (Hereinafter “local desktop”)
  • The server-based computing (SBC) way, with each user connecting to a remote desktop session running on a Microsoft terminal server and/or a Citrix Presentation Server. (Hereinafter “SBC desktop”)

The VDI approach adds a third option to this mix. Therefore in order to answer the question of why anyone would want to use the VDI option, we have to look at how the VDI option “competes” against a local desktop or SBC desktop solution.

VDI versus local desktops

When comparing VDI to a local desktop solution, you’ll see that the VDI option lets the users enjoy many of the benefits of traditional local desktops while also adding some new benefits.

VDI advantages over local desktops

  • Data containment
  • Desktops are running on server-class hardware
  • Client device independence
  • Ease of management

Data containment. Since a VDI solution means that users’ desktops are running on servers in a datacenter, the C: drive of each desktop is also running in that datacenter. That means that all data is automatically contained within the walls of the datacenter.

Desktops run on server-class hardware. Since desktop computers are distributed throughout an organization, they don’t have the same redundancy as server-class hardware. A single power supply, drive, or memory failure can take down a desktop computer. Of course the same also applies to servers. However, since there are many fewer servers in an organization than desktops, it’s okay from a financial and risk standpoint to spend money on redundant power, RAID, memory, and other technologies to ensure that server hardware doesn’t have the same potential hardware failures.

Client device independence. In a VDI environment, the ultimate “client” device is essentially nothing more than a screen, a mouse, a keyboard, and some mechanism (RDP, ICA, etc.) for connecting to remote Windows XP desktops. This means that the client device can be just about anything—a thin client, a Mac, a laptop, or a UNIX workstation.

Ease of management. If you have to manage 1000 desktops, which would you rather manage: 1000 physical desktops scattered all over the place, or 1000 desktops contained in a single datacenter? The simple fact that the client “workstations” are all in the datacenter can have a profound effect on management, patching, backups, provisioning, etc.

Local desktop advantages over VDI desktops

Of course VDI is not for everyone, and there are certainly several advantages that the “traditional” local desktop model has over VDI architectures.

  • Local desktops can be used offline (i.e. laptops)
  • No single points of failure

Local desktops can be used offline. This is probably one of the biggest downsides to VDI. In a VDI environment with everything running in the datacenter, if the network link goes down between the client device and the datacenter, or if the user wants to be mobile with a laptop, then the whole VDI concept breaks down.

It's worth noting that there are some novel solutions to this. For example, you could install VMware on a laptop and then copy the user’s VMware disk image from the datacenter to that laptop for them to use on the road, but that introduces its own challenges that are beyond the scope of this article.

Local desktops do not have a single point of failure. Even though server hardware is very redundant, if something happens to your back-end servers in a VDI environment, you’re totally out of luck. Compare that to a traditional desktop environment: if one PC fails, that user is out of luck, but the other users can still work.

VDI versus SBC desktops

The other option for providing desktops to users is via server-based computing. This is kind of interesting now because in many ways this was the first VDI solution, and it’s been in place for over ten years. In fact, Citrix didn’t even introduce seamless application publishing until 1999, so anything “SBC” before that was full remote desktops. Of course we didn’t know to call it “VDI” back then, but that’s what it was.

However, today’s Windows XP-based VDI is very different from today’s terminal server / Citrix-based desktop publishing, even though both fundamentally address the same business goal (getting desktops to users).
Let’s compare these two technologies and look at where each has an edge.

VDI advantages over SBC desktops

  • Better performance (from the users’ standpoint)
  • No application compatibility issues
  • Better / easier security
  • You can "suspend" individual VMs and then move them from server to server
  • The clients run the "workstation" version of software
  • Users have more control over their individual desktop
  • Users can take their sessions with them when they go offline
  • Easier backups

Better performance. (In theory, anyway.) Any performance gains might depend on whether your VDI Windows XP desktop backend is made up of blades or regular servers running VMware. Obviously if you only have one user (or a handful of users) on each blade, then your users can run bigger and more powerful applications without negatively affecting as many users as in a terminal server environment. If you're using VM software to cut a huge server into dozens of Windows XP VMs, then you will have the ability to partition the resources for each VM in a different way than regular terminal server or Citrix SBC sessions.

No application compatibility issues. With VDI, each backend Windows XP desktop is a full standalone workstation. This means that you don't have to worry about applications that are not terminal services-compatible.

Better / easier security. Since each user would have his own standalone Windows XP desktop, you don’t have to worry as much about locking down each user's session. If a user screws something up, he won't affect other users.

You can "suspend" individual VMs and then move them from server to server. If your backend Windows XP VDI infrastructure was based on VMware, you could have some cool flexibility for doing maintenance. Imagine a scenario where you could hit a button in a management console to "move" a user to another server. Maybe the user would receive a popup box that said "Please wait a moment." Then the server would dump the memory contents of the Windows XP desktop VM to a disk, a VM would be provisioned on another physical piece of hardware, and the VM would be brought back online. This whole process would probably take less than 30 seconds and the user would pick up right where they left off. Another use of this technology would be that you could have an additional "timeout" setting. For example, maybe after 20 minutes of no activity a user's session would be disconnected (where it is still running on the server, but disconnected from the client). If the user still didn't connect back to it after an hour, the system could "suspend" the session by dumping the memory contents to disk and then free up the hardware for someone else. Whenever the user decided to connect back in, the session would be re-hydrated and the user would pick up right where they left off—regardless of how long it had been.

The clients run the "workstation" version of software. Since these VDI desktops would be based on Windows XP instead of Windows Server sessions, any software or applications would see the sessions as real workstations. You could use workstation versions of all your software.

Users have more control over their individual desktop. Again, since each user would get a full Windows XP workstation, they can customize it however they want (or as much as you let them). But as the administrator, you can be more flexible about what you let your users do since you don't have to worry about them screwing up other users.

Users can take their sessions with them when they go offline. If your backend VDI infrastructure is based on VM desktops, you can do cool things since the VM software provides a view of the hardware to users no matter what the physical hardware looks like. So in an environment where all users' desktops are provided to them as VMs, they could use centralized backend servers when they are in the office and then use laptops running VMware when they hit the road and need to run offline. There could be a one-button "take offline" option that suspends the user's session and then copies down the disk image and memory space to the laptop where it could be resumed. You could even have generic laptops that users could "check out" when traveling. Imagine VMware ACE with the flexibility of running remotely or locally, and easily switching back and forth.

Easier backups. For VM-based VDI solutions, all you would have to do is to backup the disk image files for all the user's workstations. Then if a user lost something it would be simple to "roll back" their laptop to whenever they wanted. You could even take this a step further and provide an automatic snap-shotting service that did this once an hour.

After reading through this list, you can see that VDI is cool. It combines the benefits of distributed desktops with the benefits of server-based computing. But there’s a flip side. You also get a lot of disadvantages of distributed desktops.

SBC desktop advantages over VDI desktops

  • You don’t have to manage a whole bunch of desktops
  • Less server hardware is required
  • Less software is required

Management. One of the original beauties of SBC is that you can run probably 50 or 75 desktop sessions on a single terminal server or Citrix Presentation Server, and that server has one instance of Windows to manage. When you go VDI, your 50 to 75 users have 50 to 75 copies of Windows XP that you need to configure, manage, patch, clean, update, and disinfect. Bummer!

More server hardware is required. Giving each user a full workstation VM or blade will require more computing resources than simply giving them a session on a terminal server. A dual processor server with 4GB of RAM can probably run 50-100 desktop sessions as a terminal server. With VMware, you're probably only looking at 15-20 Windows XP VMs.
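To put rough numbers on that, here's a quick back-of-the-envelope sketch using the ballpark densities quoted above. The figures are illustrative only, not benchmarks.

    # Rough sizing comparison using the densities mentioned above (illustrative only).
    users = 1000
    sbc_sessions_per_server = 75   # desktop sessions per terminal server / Presentation Server
    vdi_vms_per_server = 20        # Windows XP VMs per VM host

    sbc_servers = -(-users // sbc_sessions_per_server)   # ceiling division: 14 servers
    vdi_servers = -(-users // vdi_vms_per_server)        # ceiling division: 50 servers

    print(f"SBC: {sbc_servers} servers, VDI: {vdi_servers} servers")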

More software is required. In addition to your OS and application software, you'll also need the VM software (from VMware or Microsoft) and you'll need some software to manage the provisioning of VMs for users. (More on this in the next section.) Of course this will also cost more money.

When does VDI make sense?

Given these comparisons, should you use VDI technology in your environment? Hopefully it’s obvious that any environment can benefit from a blended approach. Just as it makes sense to build a comprehensive application delivery solution that involves server-based computing, traditionally installed applications, and application streaming, you should think about the desktop as “just another application” that can be delivered in many ways depending on the situation.

The over-hyped example that’s always used to answer the question of “why would someone need VDI” is for remote software developers. The idea is that the remote developers can each have their own VM or bladed desktop and do whatever they want to it without affecting other users.

While I definitely think this use case is a good example, the problem is that VDI is also useful in many other ways. My fear is that always using the developer example will lead people to think that they don’t need VDI if they don’t have any remote developers.

The reality is that VDI technology is useful in any scenario where you have power users or users who need strange, non-terminal-server-compatible applications, but where the users still need the flexibility associated with traditional SBC environments. (Connecting to applications from anywhere, over slow connections, etc.)

VDI will be useful just about everywhere, albeit in a limited way. It will just be one of the multiple methods that can be used to provide a desktop to a user.

My view is that VDI can play a role in nearly 100% of the companies out there, but only for probably 2-4% of the users at those companies. So yes, it’s useful, but no, no one is throwing out their SBC environments or desktop computers.

What technology makes VDI possible?

Now that we’ve looked at what VDI is and where it can be used, let’s roll up our sleeves and look at the underlying technology that makes VDI possible. At the most basic level you need two things:

  • A mechanism to run many Windows XP desktops in your datacenter.
  • A method for your users to remotely find and connect to those Windows XP desktops in the datacenter.

How to get Windows XP in the Datacenter

The first part of a VDI solution involves getting your users’ Windows XP workstations running in your datacenter. As I briefly mentioned previously in this article, there are several ways to run lots and lots of Windows XP workstations in your datacenter. You could:

  • Buy individual Windows XP desktop machines and stack them floor-to-ceiling in your datacenter.
  • Buy server blades and install a copy of Windows XP on each blade.
  • Use VMware or Microsoft Virtual Server and build huge servers that each run many VMs.

A full analysis of the pros and cons of each of these three techniques is beyond the scope of this article. Suffice it to say, the VM-based solution usually “wins” for most people because it’s the most cost-effective.

How users find and connect to remote Windows XP VMs

The second part of a VDI solution is the technology that’s needed for a user to find and remotely connect to a Windows XP desktop running in the datacenter. The “connecting” part of the “find and connect” is easy, since Windows XP has terminal server functionality and support for the RDP protocol built-in. (This is called “remote desktop.”) So really, any thin client that can run an RDP session can connect to a remote Windows XP desktop via RDP just as easily as it can connect to a Windows terminal server.

So while the “connecting” is easy, the “finding” part is a bit more tricky.

One option would be to build all of these Windows XP VMs in your datacenter and give each one a unique hostname and/or IP address. That way each user could point the RDP software on their client device at “their” Windows XP datacenter VM.
This is fine in theory, but it would be a huge pain in real life. Specific problems include:

  • All the VMs would have to be on all the time, because if a user tried to connect to a VM that was off, the connection would fail. There is no way for the user to tell the VMware server, “Hey! Power on my VM!”
  • There is no load balancing. If you run 15 VMs on each VMware server, nothing spreads the active users evenly across those servers; it’s random luck which users happen to be connected at any given time.

Of course VMware has a scripting interface, and a lot of the early VDI shops wrote complex scripts and custom web connection portals that would look at an incoming connection request and then tell the VMware server to quickly power on the VM for a specific incoming user.

Over the past several months, several companies have released various products that have addressed this problem. These products can be lumped into the generic category called “Desktop Brokers” or “VDI brokers,” and they all work in basically the same way. Like the rough scripts of the early adopters, these VDI brokers receive incoming user connection requests and then route the user to a Windows XP VM that’s all ready to go for them. For the sake of space we won’t go into the details of a broker product here, but Ron Oglesby has written a fantastic overview.
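Conceptually, the core of any of these brokers is just a lookup plus a power-on call. Here's a heavily simplified sketch of the idea in Python; the assignment table and the "vm_host_api" helper methods (is_powered_on, power_on, wait_for_rdp, get_ip_address) are hypothetical stand-ins, not any product's real interface.

    # Minimal desktop-broker sketch: map an incoming user to "their" Windows XP VM,
    # make sure it's powered on, and hand back an address for the RDP client.
    # The assignment table and every vm_host_api method are hypothetical.

    assignments = {
        "asmith": "xp-vm-017",
        "bjones": "xp-vm-042",
    }

    def broker_connection(username, vm_host_api):
        vm_name = assignments.get(username)
        if vm_name is None:
            raise LookupError(f"No desktop VM assigned to {username}")

        if not vm_host_api.is_powered_on(vm_name):
            vm_host_api.power_on(vm_name)       # the "Hey! Power on my VM!" step
            vm_host_api.wait_for_rdp(vm_name)   # block until RDP is listening

        return vm_host_api.get_ip_address(vm_name)   # the client then connects via RDP

The real products add load balancing and on-demand provisioning on top of this, but the basic find-and-connect flow is the same.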

These desktop broker products ensure that a user is connected to their desktop VM. Great! Now it’s just like a “regular” client-based desktop, with each user running their own Windows XP desktop, except we have some of the advantages of server-based computing.
So what’s the problem?

The problem is that if you have 100 users, now you have to manage 100 Windows XP desktop images. One thousand users means one thousand images. You get the idea. This goes back to one of the main disadvantages of VDI in general—that a Windows desktop is a Windows desktop, and if you don’t manage it, it will be a nightmare—regardless of whether it’s physical or virtual.

The Dynamic Desktop

Let’s take another step back. Remember why Terminal Server / Citrix Presentation Server SBC desktops are so nice? It’s because you only have one instance of Windows running to support 50-75 user desktops, versus 50-75 instances of Windows XP in a local desktop or VDI solution.

But does that mean that all 50-75 users are getting the exact same desktop? Of course not! We use things like roaming profiles to ensure that each user gets their own shares, printers, color schemes, and other desktop customizations.

But what about applications? Do all 50-75 terminal server desktop users see the same application list? Again, of course not! There are many ways to customize the application list that each user sees:

  • You can run the Citrix Program Neighborhood Agent software on your Citrix Presentation Server so that each user gets a dynamic application list in their Start menu. These icons would then launch ICA sessions to seamless published applications running on other Citrix Presentation Servers. (More on this in brianmadden.com article Doc# 275)
  • You can use an application streaming solution, such as Microsoft Softricity, Altiris SVS + AppStream, or Citrix Streaming Server, to dynamically stream applications to the server so that they are there locally and available to the users.
  • You can install the applications directly on the server in the traditional way, so they can be accessed locally via the Start menu.

These are only a few options, but the point is that even though you have 50-75 users “sharing” the same instance of Windows, each user gets their own environment. And why is that? Because a generic template desktop is just the starting point, and that desktop template is dynamically customized with roaming profiles, PN Agent application links, and streamed applications to provide the user with their own unique desktop environment.

The Dynamic VDI Desktop

By now it should be pretty obvious where we’re going. On one hand, VDI is cool in a lot of scenarios, but it’s no fun trying to manage hundreds of Windows XP desktops. On the other hand, dynamic desktops are used in SBC environments to provide custom desktops for users based on a single instance of Windows.

Now put your hands together.

The result? The dynamic VDI desktop. Imagine a VDI environment where you get both of these sets of benefits. From a technology standpoint, this means that instead of having one VMware disk image for each and every user, you could build a generic template disk image. This image could be provisioned (on demand) as users connect, and it could be dynamically customized with each user’s applications. You get a fully custom desktop for each user with the management simplicity of an SBC desktop. It’s truly the best of both worlds.

On top of that, there’s another major benefit of the dynamic VDI desktop. That’s the fact that the VMs don’t need to be created (or even running) until a user needs them. So if you have 1000 users but no more than 800 are ever running at the same time, you can scale your environment for 800 users and the system will provision and start up VMs as users need them.

Before we continue it’s important to point one thing out. These dynamic VDI advantages do not mean that the dynamic VDI Desktop will replace traditional local desktops or SBC desktops. The dynamic VDI desktop is still VDI, and complete desktop delivery solutions will still involve a blend of VDI, SBC, and traditional desktops.

The “best of both worlds” applies to scenarios where VDI already makes sense. (And in some cases, it might help you decide to go with VDI where you wouldn’t have previously due to the daunting management demands.) The idea is that you still decide between SBC, VDI, and traditional desktops, and then for the subset of desktops in your environment where you feel VDI is a fit, you can further decide whether you want those to be static one-to-one mappings or dynamically provisioned desktops based on a shared template.

Practical Implementation of the Dynamic VDI Desktop

Now that we’ve looked at what a dynamic VDI desktop is, let’s look at how you can make this happen. The main thing you need is a way for each new user to be connected to a generic Windows XP template.

It works like this: when a user’s connection request comes in, the system creates a new VM based on a copy of the Windows XP template’s disk image, and the user connects to that new VM. Once the logon is complete, the roaming profile is loaded, the other dynamic application customizations take place, and the user is ready to go.

So how do you do this? A lot of people think that because it’s simple to create a new VM in VMware and simple to copy a VMware disk image, you can do all of this with VMware and some scripting.

Unfortunately there are several roadblocks that keep this from being so easy. Probably the biggest one is the fact that you’ll need to “boot” each new VM based on the VMware disk template. This is a problem because Windows stores things like the computer name and the IP address in the registry which is stored on the disk, so each new Windows XP VM that you boot up would have the same information.
Of course you can easily change this via a startup script that runs within the VM. You could have it check some database and then fill in the appropriate information, but if you change the computer name then you would have to add the computer into your corporate domain, and that requires a reboot! (And of course you’d want it in the corporate domain since you need to manage it and use roaming profiles.)
As you can see, the dynamic VDI desktop concept is great, but in terms of practical implementation, it’s not quite as simple as “just copying VMware disk template files.”
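To see why, here's roughly what the do-it-yourself flow looks like, with the snag called out in the comments. Every helper function here is a hypothetical placeholder rather than a real VMware API; only the overall sequence matters.

    import shutil

    def register_and_boot_vm(disk_path):
        # Placeholder for registering the cloned disk as a VM and powering it on.
        raise NotImplementedError("vendor-specific")

    def run_startup_script_in_guest(vm, action):
        # Placeholder for whatever in-guest customization mechanism you script up.
        raise NotImplementedError("vendor-specific")

    def provision_from_template(template_vmdk, user):
        clone_path = f"/vmfs/desktops/{user}.vmdk"
        shutil.copyfile(template_vmdk, clone_path)   # copying the disk is the easy part

        vm = register_and_boot_vm(clone_path)        # boots with the template's name and IP

        # The computer name lives in the registry on that cloned disk, so a script
        # inside the guest has to look up and apply the real name for this user...
        run_startup_script_in_guest(vm, f"rename computer to XPVDI-{user}")

        # ...and joining the renamed machine to the corporate domain (needed for
        # management and roaming profiles) forces a reboot, which is exactly the
        # logon-time delay that makes this approach so painful.
        run_startup_script_in_guest(vm, "join corporate domain, then reboot")
        return vm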

In light of this, some people have decided that they want to build a VDI solution without a dynamic desktop—that instead they’d rather just fill a SAN with one VMware disk image for each user. While this is simple to do from a technical standpoint and avoids the problems mentioned above, can you imagine how much this would cost in terms of storage space? Even a smallish environment with 100 VDI users would require 2TB(!) if each Windows XP desktop disk image was 20GB.
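For comparison, here's the same storage math with a shared template. The 100-user and 20GB figures come from the example above; the per-user allowance for each user's changes is an assumed number purely for illustration.

    # Storage back-of-the-envelope: one private image per user vs. one shared template.
    users = 100
    image_gb = 20         # per-desktop disk image, as in the example above
    per_user_changes_gb = 1   # assumed space for each user's changes -- illustrative only

    private_model_gb = users * image_gb                       # 2,000 GB (roughly 2TB)
    shared_model_gb = image_gb + users * per_user_changes_gb  # 120 GB

    print(private_model_gb, shared_model_gb)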

This is where technologies like Ardence come in. Citrix announced that they were buying Ardence in December 2006, and the deal closed in January 2007. So now Ardence is officially a Citrix company.

A brief overview of Ardence’s technology

Let’s put all of this VDI stuff aside for a few moments and just look at Ardence’s raw technology. After that we’ll circle back and look at how Ardence enables dynamic VDI desktops.

Ardence is a software company. In the Ardence world, your computer’s disk drive is actually a disk image file sitting on a remote Ardence server. (In concept, these disk image files are similar to VMware disk image files.) Ardence calls these “vDisks.”

To have your computer use this vDisk instead of its own local hard disk, you change the boot order preference in the BIOS and configure it so that it boots from the network (via PXE). When the computer turns on, it boots to the network, grabs an IP address from the DHCP server, then reads some of the extended DHCP options to find the bootstrap location. The computer then downloads a very small bootstrap which causes it to contact an Ardence server.

The Ardence server recognizes the booting computer via its MAC address and checks a configuration database to figure out which vDisk file that computer should use. The client computer then mounts the vDisk just like a normal disk and the boot process continues as normal.
Ardence calls this technology “streaming,” although personally I’m not sure that’s the best name for it. To me, “streaming” suggests that the disk content is copied down to the client device as it’s needed. I guess in some ways that’s true. But with Ardence the client computer is actually mounting a disk volume over the network. The client computer does NOT need to have any hard drive locally, and the entire remote drive image is NOT copied or cached locally.
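Conceptually, the server side of that boot handshake is just a lookup keyed on the client's MAC address. Here's a toy illustration; the table layout, file names, and field names are made up for this sketch, not Ardence's actual configuration database.

    # Toy illustration of the "which vDisk does this machine get?" lookup that
    # happens when a network-booted client contacts the provisioning server.
    vdisk_map = {
        "00:0c:29:3a:5f:01": {"vdisk": "winxp-template.vhd", "mode": "shared"},
        "00:0c:29:7b:22:9c": {"vdisk": "developer-jsmith.vhd", "mode": "private"},
    }

    def handle_boot_request(mac_address):
        entry = vdisk_map.get(mac_address.lower())
        if entry is None:
            raise LookupError(f"Unknown client {mac_address}")
        # The client mounts the returned vDisk over the network and keeps
        # booting from it as if it were a local drive.
        return entry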

Before we go any further, I think we need to take a deeper look at some of the technology that Ardence is using here.
At the most basic level, Ardence developed a Windows disk drive device driver. Much like Dell or HP have drivers that enable Windows to recognize their RAID controllers, Ardence has a driver that enables Windows to recognize a remote Ardence vDisk being accessed across a network.

The core of this is their custom-developed, UDP-based disk drive protocol. It’s UDP-based because UDP is packet-based and connectionless, which means less overhead than TCP. (The downside to this is that UDP packet delivery is not guaranteed, but in today’s switched networks packet delivery is virtually guaranteed anyway, and Ardence built custom logic directly into their protocol that re-requests dropped packets as needed.)
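The "re-request dropped packets" idea is easy to picture with a tiny UDP client sketch. This is generic retry-on-timeout logic in Python, not Ardence's actual wire format; the 8-byte block-number request is invented for illustration.

    import socket
    import struct

    def read_block(server, block_number, retries=5, timeout=0.5):
        """Request one disk block over UDP, re-sending the request if the reply is lost."""
        request = struct.pack("!Q", block_number)   # invented packet format: 8-byte block number
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        try:
            for _ in range(retries):
                sock.sendto(request, server)        # server is an (ip, port) tuple
                try:
                    data, _addr = sock.recvfrom(65535)
                    return data                     # the block payload arrived
                except socket.timeout:
                    continue                        # reply lost: re-request the block
            raise TimeoutError(f"block {block_number} not received after {retries} attempts")
        finally:
            sock.close()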

In concept, the Ardence protocol is kind of like iSCSI, although Ardence’s is much more efficient. Why? The Ardence protocol was developed from the start for use over a network. This is very different from iSCSI, which takes a protocol that was developed for local access (SCSI) and wraps it in TCP. In iSCSI transfers you’ll often find that the protocol header is larger than the payload!

Another fundamental key to the Ardence protocol is that it can endure network failures and disconnects/reconnects. This capability is built right into the Ardence disk driver and protocol.

So what does all this mean? In a typical network boot scenario (where Windows is booting from a network disk instead of a local disk), if you disconnect the network cable while Windows is booting the system will blue screen. In the Ardence world you can pull the cable during the boot process and the process just sits there. The instant you plug the cable back in, the boot process continues.

To really dig into the cool stuff, we’ll need to look at the Ardence vDisk files that are stored on a file server. There are several different ways that a vDisk can be used. The method that I’ve described so far could be called a “private” disk model, where each client computer is one-to-one mapped to an Ardence vDisk file. The Ardence disk driver running on the client computer redirects physical disk block-level read and write requests across the network to the vDisk file, and the vDisk file grows and changes as the client computer is used. Again, this is a lot like a VMware VMDK file.

However, there is another major option that Ardence provides with respect to disk files. Instead of each client computer having a one-to-one mapping to each of their own “private” disk files, you can have multiple client computers share a single “public” read-only vDisk file (with proper Microsoft OS licensing of course). In this case Ardence configures the disk file as “read only,” and all client computers get the same image.

Of course doing this requires some additional intelligence because as you can imagine, Windows would blue screen if it tried to boot to a read-only disk.

The way Ardence handles this is that they transparently redirect disk write requests to another location. Each client computer that’s sharing the same read-only vDisk ends up with a “delta” (or “write cache,” as Ardence calls it) file that holds everything that’s changed on that disk since the computer booted up. This write cache can be stored in a specially segmented area of the client computer’s RAM, on the client computer’s hard drive, or as a separate file on a network file server.

The beauty of using these public read-only disk images is that when you reboot a client computer, the cache is cleared and the computer starts fresh. (What if you don’t want the computer to be reset to the base image on reboot? This is what the “private” disks are for that we talked about first.)
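The shared read-only image plus per-client write cache is essentially block-level copy-on-write. Here's a minimal sketch of the idea (an illustration of the concept, not Ardence's implementation):

    # Block-level copy-on-write sketch: the shared image is never modified; writes
    # land in a per-client delta, and reads prefer the delta when a block has changed.
    class SharedImageDisk:
        def __init__(self, base_image):
            self.base = base_image   # dict of block_number -> bytes (the read-only vDisk)
            self.delta = {}          # this client's write cache

        def read(self, block_number):
            # Changed blocks come from the write cache; everything else from the shared image.
            return self.delta.get(block_number, self.base.get(block_number, b"\x00" * 512))

        def write(self, block_number, data):
            # The shared image is never touched; the change goes into the delta.
            self.delta[block_number] = data

        def reboot(self):
            # Throwing away the delta is what makes the machine start "fresh" again.
            self.delta.clear()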

Delivering Desktops with Ardence

Even from that brief description of Ardence’s technology, you can see how it is useful when delivering desktops to users. There are two use cases:

  • When used in concert with other VDI solutions, Ardence can facilitate “on demand” dynamic desktop images for backend VMware servers.
  • Ardence can deliver local dynamic Windows XP desktops to actual full-blown desktop PC devices, combining the strengths of “traditional desktop PCs” with centrally managed SBC environments.

Let’s explore each use case.

Using Ardence to power VDI desktops

Imagine you have a huge VMware server that is ready to host Windows XP VMs for users. Since you don’t want to consume terabytes of SAN space storing unique disk images for each user, you’ll probably want to go the “dynamic desktop” VDI route. That means that you need each of your Windows XP VMs to boot from the same template disk image, and that’s exactly where Ardence fits in.

With Ardence, you configure your VMware VMs so that they PXE boot. When a new VM starts up, it PXE boots and contacts the Ardence server. The Ardence server looks at the ID of the booting VM and then mounts a vDisk for it. Of course this vDisk can be shared between hundreds or thousands of VMs.

Since the Ardence technology is based on a device driver running in the Windows XP VM, Ardence intercepts calls for things like the domain RID and computer name, and it automatically looks up the ID of the client computer in its own database and replaces the generic template computer name and RID with the real ones for that device.

So with Ardence, it’s possible to enable the concept of the dynamic VDI desktop.

Another interesting use of Ardence in VDI environments is that you can use it to stream the VMware host software that runs all of the Windows XP VMs. For instance, in many cases you will probably only be able to run 15-30 Windows XP VMs per physical server host. If you have several hundred VDI users, suddenly you’re talking about 20 or more VMware host servers. How are you managing those?
With Ardence, you configure your host servers to PXE boot a VMware image. Once the host is online, it can add itself to the pool and immediately begin hosting Windows XP VDI VMs for users (which will also be PXE booted from Ardence Disk Images). In some ways this is the “most” virtual environment, since you completely separate all physical hardware from OS execution.

 
 




Comments

Kata Tank wrote Make a good use of technology
on Thu, Mar 15 2007 6:37 AM
When you only have a hammer, everything looks like a nail.

Each technology has both a good and a bad side.
Each of them could find a place in my infrastructure to serve some of my needs and bring benefits.
None was able to serve all needs.

That's why I like the idea of VDI/DDI and other types of initiatives. It brings technologies together to build a solution! That's the only thing I need (a solution, not technology)...
Robert Murray wrote Using Ardence to power VDI Citrix servers?
on Thu, Mar 15 2007 9:34 AM
What if we changed the last section (Using Ardence to power VDI desktops) to start "Imagine you have a huge VMware server that is ready to host Windows Server VMs running Citrix..." That could get interesting!
Jeroen van de Kamp wrote What about running Ardence in VMware/VPC on the local workstation?
on Thu, Mar 15 2007 10:30 AM
VDI is widely accepted as a solution for developers to facilitate a centralized and managed development environment. However, when I was in discussion with Rodney Medina and a customer about Ardence and VDI, Rodney made a thought provoking suggestion.
 
The interesting thing is that Ardence combined with VMware Player can be a much cheaper alternative to VDI, especially in LAN environments. For instance, just install the free VMware Player on the managed workstation, and run the development OS from Ardence. The main drawback of running a virtualized OS on the workstation itself is image maintenance, storage and backup. The brilliant thing about using Ardence for the local virtual machine is that it immediately fixes all those issues. Of course this solution is not suited to accessing the dev workstations in WAN scenarios. But where there is no such requirement, this seems like a very effective solution to facilitate a centrally managed and stored development environment.
Dan Shappir wrote VDI vs. SBC
on Thu, Mar 15 2007 12:03 PM
First, you can find another white-paper on VDI, comparing it to SBC, at http://www.dabcc.com/downloadfile.aspx?id=306.
 
I chuckled when I read in the article that one of the advantages of VDI over SBC is "Better performance (from the users’ standpoint)". While it is technically correct that you could be very particular regarding how you partition the VM server, in practice make no mistake: you will get far fewer sessions on a VM server than on an equivalent TS. Depending on the hardware and software mix, a VDI server will be able to effectively support about a third of the sessions of a TS on the same box.
 
Which brings me to SBC's currently most significant advantage over VDI: cost.
 
As was mentioned in the article, given that you will need three times the servers for VDI, server hardware cost goes way up compared to SBC. In addition, VMware ESX is roughly 2.5 times more expensive than Windows 2003, so the server software cost would be 3 x 2.5 = 7.5 times more expensive. But the real kicker is that Windows XP is much more expensive than a TSCAL. Roughly twice as expensive, I believe. So a pure VDI solution could easily be 3-4 times as expensive as the equivalent SBC solution.
 
Things get even worse if you use the current Citrix VDI solution because you would need to purchase additional Citrix servers as well as an additional Citrix license for each user on top of the Windows XP license.
 
Because of all this I believe that a proper solution would utilize both technologies in tandem: SBC where you can and VDI where you must. BTW, I think that a functionality I call "Session Virtualization" coupled with SBC could provide many of the benefits of VDI. Read more about it on my blog.
Dan
Nick Fields wrote RE: Using Ardence to power VDI Citrix servers?
on Fri, Mar 16 2007 10:00 AM
ORIGINAL: robertmurray

What if we changed the last section (Using Ardence to power VDI desktops) to start "Imagine you have a huge VMware server that is ready to host Windows Server VMs running Citrix..." That could get interesting!

 
Indeed, how deep down the rabbit hole do you want to go...  ;)
Allan Harder wrote RE: VDI vs. SBC
on Fri, Mar 16 2007 4:26 PM
Session Virtualization? Very interesting. How about "Roaming Virtual Sessions"?
Imagine going into your Access Suite Console and dragging user sessions from one Citrix server and dropping them onto another.
The user's screen greys out with a notice for a time as the entire session is ported over to another server.
This could free up a server for maintenance, upgrades, or whatever.
 
I suspect Citrix and/or Microsoft are already working on this one. Perhaps in 2 years? Now that would be useful.
Patrick Rouse wrote RE: VDI vs. SBC
on Sat, Mar 17 2007 11:33 AM
Citrix "current" VDI Solution is basically a Band AID until Project Trinity is done, and no one in their right mind would purchase this, but rather it's used by people with Citrix Licenses who have need to isolate some apps into XP VMs.
 
There are currently better VDI Implementations, i.e. Provision Networks VAS and Propero.
 
http://www.msterminalservices.org/articles/Virtual-Desktop-Infrastructure-Overview.html
 
 
Dan Shappir wrote RE: VDI vs. SBC
on Sat, Mar 17 2007 3:02 PM
Agreed. But it doesn't change the basic math: VDI is more expensive than the equivalent SBC solution.
See for example this VMware white paper about Microsoft licensing terms: http://www.vmware.com/solutions/whitepapers/msoft_licensing_wp.html
Microsoft doesn't have virtual desktop offerings of its own, so they are denying this option to customers.
Patrick Rouse wrote RE: VDI vs. SBC
on Sat, Mar 17 2007 4:16 PM
I think what you're missing is that VDI is not an SBC replacement (at this point in time), but rather an SBC supplement. VDI is targeting clients that can't or won't use SBC for some or all of their applications.
Dan Shappir wrote RE: VDI vs. SBC
on Sat, Mar 17 2007 4:32 PM
For what it's worth, that is exactly the point I made:
I believe that a proper solution would utilize both technologies in tandem: SBC where you can and VDI where you must
Dan
Dan Shappir wrote RE: VDI vs. SBC
on Sat, Mar 17 2007 5:15 PM
I agree that the ability to move sessions this way would be very cool, but that is not the reason I proposed it. Read more about it on my blog.
Dan
Massimo Re Ferre' wrote Good one....
on Mon, Mar 19 2007 4:34 PM
Well done Brian. Enjoyed it. Very well balanced.
 
FYI I am trying to maintain a list of these brokers here at: http://www.it20.info/misc/brokers.htm 
 
I know I know it should be updated since we now have the Citrix Desktop Server license etc etc ... I just need to find the time to do that ....
 
Massimo.
Stefan Holzwarth wrote What about a technology like Virtuozzo for Desktops
on Thu, Mar 22 2007 3:28 AM
Since Windows XP isn't available at the moment with Virtuozzo, this is not an option NOW.
But through virtualisation of the Windows API it is possible to run many similar desktops on the same infrastructure, including disk! No need for complicated multi-layer applications/drivers...
I do not see a better way.
 
Regards Spex
(BTW I do not sell this product)
VirtClientGuru wrote Nice perspective, but incomplete
on Thu, Mar 22 2007 10:48 AM
Two points:
1.  If you're going to discuss "what's the best solution" then you need to include the entire virtual client space.  So where is the discussion of ClearCube Blade PCs - or HP's?  They seem to be able to do everything a VM can do, and probably for a lot less money when you talk about REAL production environments, not test labs. 
 
2.  The article mentions 50-75 VMs on a server, but when I go to VMware's website the only VDI success stories I see are 30 or less. Sounds more like theory than reality. The only class of users where this kind of load could possibly make sense would be light users, which is SBC's sweet spot. So I guess VDI really only makes sense for SBC-type users whose apps don't work in that environment.
 
From what I've seen, when you factor in all the software licensing costs (VMWare, Windows, and RDLs) , if you are at 20 users per server for VDI your cost is around $2k/seat.  Who's gonna pay that much?  Very few I would guess.
 
I see lots of people looking at VDI but not very many deploying it beyond niche situations. The same can be said for blade PCs, for sure. But because both of them rely on RDP (or ICA), they can't be true desktop replacements. Somebody has to come up with better connection software than those two if either of these is going to break out.
stucco wrote Network overhead
on Fri, Mar 23 2007 5:49 PM
Is network overhead going to be an issue? Since you are running all of your disk reads and writes over the network, won't this become a bottleneck as you start adding more devices onto it?
steve aston wrote User specific desktops on Ardence
on Wed, Mar 28 2007 11:57 AM
I understand the concept behind having the public image for your virtual desktop, but are you saying that with the Ardence dynamic desktop the user gets his or her own settings for that desktop as per roaming profiles? You do address the problem in the paragraph after "put your hands together" where you talk about "the image could be dynamically customised with the users applications ..." but does the Ardence thing do that? Just curious really....
Roger Gransier wrote Desktop Broker
on Tue, Apr 10 2007 2:32 AM
Whatever happened to Citrix Desktop Broker? I only find a handful of documents about it, and I don't believe that it is a real production-ready solution.
What do you think?
Erik McCloud wrote Real World SBC vs VDI
on Mon, Apr 16 2007 1:01 PM
How about comparing real-world Citrix SBC vs VDI? What I am saying is that Citrix's reason for being in a large environment isn't to deliver a desktop to the end user. Everywhere I've used it and seen it used, it is a seamless application. Because the real benefit of using Citrix is the localization of the app next to the backend data or database, eliminating performance-costly calls across a WAN. How cost effective is VDI for that?

And since VDI doesn't (yet) eliminate the need for a local OS, then aren't you just complicating your support structure? So you'll still end up with WinXP, WinXP MCE, or say Linux. You still need the local support staff to run it, so what did you save?

Don't get me wrong, I think people could be on to something here, but I don't see the "Citrix Killer" that some make it out to be. And if you're not eliminating your Citrix costs, what's the point?
Dan Shappir wrote RE: Real World SBC vs VDI
on Wed, Apr 18 2007 2:59 AM
First, VDI desktops will most likely be accessed from thin clients (CE WBT or LTC). One of the main selling points of thin clients is that they require significantly less support than a standard PC. So while technically there will still be a local OS you will require far less local support.
 
Second, you can use published seamless applications from a VDI, provided that your VDI broker provides this functionality. In this context VDI is an alternative to virtualizing/isolating applications on SBC. This means you will use it for applications that are incompatible with SBC out-of-the-box.
 
Dan
http://ericomguy.blogspot.com
Alan Osborne wrote Microsoft's roadmap for this space
on Thu, May 10 2007 7:14 PM

It will be interesting to see how Microsoft's acquisition of Softricity will play out in this space. The reason people are looking at VDI in the first place is because of some of the challenges of SBC computing, namely providing users with a degree of flexibility in terms of their desktop and customizations, application compatibility issues, security, and software conflicts.

I have found that the majority of users are able to customize their Citrix/TS desktops etc. sufficiently. There are very few customizations that could be deemed "essential" that users cannot currently make in a TS/Citrix environment. Longhorn will only improve this flexibility, especially with the improvements to roaming profiles and IntelliMirror integration where we can get all user data off of the TS/Citrix server.

SBC solutions provide all the benefits of VDI solutions in comparison to traditional desktops (i.e. data containment, server-class hardware, client device independence, ease of management, etc). By virtualizing the Citrix servers on a VMware backend, getting all user data off of the Citrix VMs (and onto a LUN on a SAN), and using VMware snapshots and VMDK backup tools, you also reap the benefits of fewer hardware compatibility issues (all VMs have identical hardware), VM portability, the ability to VMotion Citrix servers on the fly, the smallest amount of server hardware possible with the best utilization of resources, and the ability for users to access their remote sessions from anywhere at any time.

With Longhorn, security improvements make TS/Citrix implementations that much safer. Also, TS will be available on a Server Core installation in Longhorn shortly after it goes RTM, making the OS much leaner with corresponding performance gains. Finally, Citrix's roadmap includes huge improvements for graphically intensive applications, which will make such applications more feasible in SBC.

So what about application compatibility issues?

That's where Microsoft's acquisition of Softricity is interesting. Now we have a Microsoft solution for those problematic applications that just won't run under Terminal Services, while still keeping to our SBC delivery platform. These problem apps can be streamed and virtualized within the TS/Citrix session and run side by side with other apps that in the past might cause conflicts. Also, you can run "desktop" versions of software that wouldn't otherwise play well with TS/Citrix.

Given the small number of issues to deal with and Microsoft's apparent roadmap, I would expect VDI to be a niche rather than a major force and to see the pendulum swing back in favor of SBC solutions with Citrix, Microsoft, and (if they play their cards right) VMWare coming out as the big winners.

I would expect to see Linux become the thin client OS of choice on the edge as well. Delivery of a PXE boot Linux kernel with a local web browser, plugins for Flash and Java support, and support for client side device redirection is just too tempting...

Guest wrote ClearCube's VDI software
on Mon, Sep 17 2007 4:18 AM

Brian has mentioned ClearCube in previous articles as a blade provider, though they have been building a full-fledged VDI delivery software infrastructure as well, which recently won the VMworld 2007 award for best desktop virtualization product. Until recently, I wasn't aware that ClearCube had a significant offering in the software space; it had been a while since I looked at them. Recently my employer (a bank) has started evaluating ClearCube software only, to run on our existing IBM blade hardware.

Their software appears to combine a connection broker with virtual machine and thin client management capabilities, and it also supports the new Teradici chips as well as good old Linux/XPe thin client devices. Seems pretty interesting and a very effective alternative to something like Citrix, which has just gotten way too complicated.

It seems to me like you have either the behemoth applications like Citrix, which were really built for a non-virtualized world, or very simplistic utility-type connection brokers from a variety of small startups. Citrix is only now being retrofitted to work with virtualization. And most of the smaller connection brokers are well-intentioned, but too simplistic and early in their lifecycle from a stability standpoint. In our evaluation of ClearCube's Sentral software, it seems to provide a happy medium between functionality/completeness on the one hand and relative simplicity on the other.

 

Guest wrote Re: Network overhead
on Fri, Oct 12 2007 7:30 AM
Make a VLAN for it.
