VDI, or Virtual Desktop Infrastructure, is quickly entering the buzzword danger zone. At the most basic level, VDI technology is a new method for delivering desktops to users. Of course users have been using desktops for years, at first running locally on their own PCs, and more recently by accessing remote server-based computing (SBC) desktops running on Microsoft terminal servers or Citrix Presentation Servers.
Now that various VDI technologies have hit the market, people’s reactions are all over the place. Some people are talking about how VDI will replace or compete with SBC and traditional technologies. In this article I’ll explain why this isn’t the case, and how all three technologies (VDI, SBC, and traditional desktops) can be used together to provide a holistic desktop delivery solution for a company of any size.
I’ll also explore the technology that makes VDI a reality and discuss some of the roadblocks that may be encountered along the way. I’ll talk about the emergence and importance of a concept known as the “dynamic desktop,” and why this is needed for a “true” VDI solution.
Finally I’ll provide a quick overview of Citrix’s Ardence solution and describe how it can enable organizations to truly realize the “on demand” desktop, whether it’s VDI-based or traditional PC-based.
What is VDI?
The idea is simple. Instead of giving a user a local PC running a local copy of Windows XP, you run the Windows XP (or Vista) desktop software in your datacenter. Then your users remotely connect to and control their own instance of their Windows desktop in a one-to-one manner from their own client device.
In doing so, the user can use any client device they want to access “their desktop.” If you replace a user’s desktop with a thin client that automatically connects to a Windows XP machine in the datacenter when the client is powered on, there’s a good chance that the user wouldn’t even know they were using a remote desktop.
In reality, no one would implement this by stacking Windows XP desktop computers floor-to-ceiling in their datacenter. Instead, VDI is typically implemented by building huge VMware servers running many Windows XP VMs or by using high-density blade servers running Windows XP.
Why use VDI?
So why would anyone do this? To understand it, we first have to look at the alternatives. The role of an IT department is to provide applications for users. In order to use an application, a user needs a desktop. (Be it a Windows desktop, a browser window, or something else, there has to be a backdrop that has some method for users to select and launch applications.)
VDI is about providing desktops to users. Before VDI, there were two other ways to provide desktops to users:
- The old way, with each user running a local copy of Windows XP on their own local desktop or laptop computer. (Hereinafter “local desktop”)
- The server-based computing (SBC) way, with each user connecting to a remote desktop session running on a Microsoft terminal server and/or a Citrix Presentation Server. (Hereinafter “SBC desktop”)
The VDI approach adds a third option to this mix. Therefore in order to answer the question of why anyone would want to use the VDI option, we have to look at how the VDI option “competes” against a local desktop or SBC desktop solution.
VDI versus local desktops
When comparing VDI to a local desktop solution, you’ll see that the VDI option lets the users enjoy many of the benefits of traditional local desktops while also adding some new benefits.
VDI advantages over local desktops
- Data containment
- Desktops are running on server-class hardware
- Client device independence
- Ease of management
Data containment. Since a VDI solution means that users’ desktops are running on servers in a datacenter, the C: drive of each desktop is also running in that datacenter. That means that all data is automatically contained within the walls of the datacenter.
Desktops run on server-class hardware. Since desktop computers are distributed throughout an organization, they don’t have the same redundancy as server-class hardware. A single power supply, drive, or memory failure can take down a desktop computer. Of course the same also applies to servers. However, since there are many fewer servers in an organization than desktops, it’s okay from a financial and risk standpoint to spend money on redundant power, RAID, memory, and other technologies to ensure that server hardware doesn’t have the same potential hardware failures.
Client device independence. In a VDI environment, the ultimate “client” device is essentially nothing more than a screen, a mouse, a keyboard, and some mechanism (RDP, ICA, etc.) for connecting to remote Windows XP desktops. This means that the client device can be just about anything—a thin client, a Mac, a laptop, or a UNIX workstation.
Ease of management. If you have to manage 1000 desktops, which would you rather manage: 1000 physical desktops scattered all over the place, or 1000 desktops contained in a single datacenter? The simple fact that the client “workstations” are all in the datacenter can have a profound effect on management, patching, backups, provisioning, etc.
Local desktop advantages over VDI desktops
Of course VDI is not for everyone, and there are certainly several advantages that the “traditional” local desktop model has over VDI architectures.
- Local desktops can be used offline (i.e. laptops)
- No single points of failure
Local desktops can be used offline. This is probably one of the biggest downsides to VDI. In a VDI environment with everything running in the datacenter, if the network link goes down between the client device and the datacenter, or if the user wants to be mobile with a laptop, then the whole VDI concept breaks down.
It's worth noting that there are some novel solutions to this. For example, you could install VMware on a laptop and then copy the user’s VMware disk image from the datacenter to that laptop for them to use on the road, but that introduces its own challenges that are beyond the scope of this article.
Local desktops do not have a single point of failure. Even though server hardware is very redundant, if something happens to your back-end servers in a VDI environment, you’re totally out of luck. Compare that to a traditional desktop environment: if one PC fails, that user is out of luck, but the other users can still work.
VDI versus SBC desktops
The other option for providing desktops to users is via server-based computing. This is kind of interesting now because in many ways this was the first VDI solution, and it’s been in place for over ten years. In fact, Citrix didn’t even introduce seamless application publishing until 1999, so anything “SBC” before that was full remote desktops. Of course we didn’t know to call it “VDI” back then, but that’s what it was.
However, today’s Windows XP-based VDI is very different from today’s terminal server / Citrix-based desktop publishing, even though both fundamentally address the same business goal (getting desktops to users).
Let’s compare these two technologies and look at where each has an edge.
VDI advantages over SBC desktops
- Better performance (from the users’ standpoint)
- No application compatibility issues
- Better / easier security
- You can "suspend" individual VMs and then move them from server to server
- The clients run the "workstation" version of software
- Users have more control over their individual desktop
- Users can take their sessions with them when they go offline
- Easier backups
Better performance. (In theory, anyway.) Any performance gains might depend on whether your VDI Windows XP desktop backend is made up of blades or regular servers running VMware. Obviously if you only have one user (or a handful of users) on each blade, then your users can run bigger and more powerful applications without negatively affecting as many users as in a terminal server environment. If you're using VM software to cut a huge server into dozens of Windows XP VMs, then you will have the ability to partition the resources for each VM in a different way than regular terminal server or Citrix SBC sessions.
No application compatibility issues. With VDI, each backend Windows XP desktop is a full standalone workstation. This means that you don't have to worry about applications that are not terminal services-compatible.
Better / easier security. Since each user would have his own standalone Windows XP desktop, you don’t have to worry as much about locking down each user's session. If a user screws something up, he won't affect other users.
You can "suspend" individual VMs and then move them from server to server. If your backend Windows XP VDI infrastructure was based on VMware, you could have some cool flexibility for doing maintenance. Imagine a scenario where you could hit a button in a management console to "move" a user to another server. Maybe the user would receive a popup box that said "Please wait a moment." Then the server would dump the memory contents of the Windows XP desktop VM to a disk, a VM would be provisioned on another physical piece of hardware, and the VM would be brought back online. This whole process would probably take less than 30 seconds and the user would pick up right where they left off.

Another use of this technology would be that you could have an additional "timeout" setting. For example, maybe after 20 minutes of no activity a user's session would be disconnected (where it is still running on the server, but disconnected from the client). If the user still didn't connect back to it after an hour, the system could "suspend" the session by dumping the memory contents to disk and then free up the hardware for someone else. Whenever the user decided to connect back in, the session would be re-hydrated and the user would pick up right where they left off—regardless of how long it had been.
The clients run the "workstation" version of software. Since these VDI desktops would be based on Windows XP instead of Windows Server sessions, any software or applications would see the sessions as real workstations. You could use workstation versions of all your software.
Users have more control over their individual desktop. Again, since each user would get a full Windows XP workstation, they can customize it however they want (or as much as you let them). But as the administrator, you can be more flexible about what you let your users do since you don't have to worry about them screwing up other users.
Users can take their sessions with them when they go offline. If your backend VDI infrastructure is based on VM desktops, you can do cool things since the VM software provides a view of the hardware to users no matter what the physical hardware looks like. So in an environment where all users' desktops are provided to them as VMs, they could use centralized backend servers when they are in the office and then use laptops running VMware when they hit the road and need to run offline. There could be a one-button "take offline" option that suspends the user's session and then copies down the disk image and memory space to the laptop where it could be resumed. You could even have generic laptops that users could "check out" when traveling. Imagine VMware ACE with the flexibility of running remotely or locally, and easily switching back and forth.
Easier backups. For VM-based VDI solutions, all you would have to do is back up the disk image files for all the users' workstations. Then if a user lost something it would be simple to "roll back" their desktop to whenever they wanted. You could even take this a step further and provide an automatic snapshotting service that did this once an hour.
After reading through this list, you can see that VDI is cool. It combines the benefits of distributed desktops with the benefits of server-based computing. But there’s a flip side. You also get a lot of disadvantages of distributed desktops.
SBC desktop advantages over VDI desktops
- You don’t have to manage a whole bunch of desktops
- Less server hardware is required
- Less additional software is required
Management. One of the original beauties of SBC is that you can run probably 50 or 75 desktop sessions on a single terminal server or Citrix Presentation Server, and that server has one instance of Windows to manage. When you go VDI, your 50 to 75 users have 50 to 75 copies of Windows XP that you need to configure, manage, patch, clean, update, and disinfect. Bummer!
More server hardware is required. Giving each user a full workstation VM or blade will require more computing resources than simply giving them a session on a terminal server. A dual processor server with 4GB of RAM can probably run 50-100 desktop sessions as a terminal server. With VMware, you're probably only looking at 15-20 Windows XP VMs.
More software is required. In addition to your OS and application software, you'll also need the VM software (from VMware or Microsoft) and you'll need some software to manage the provisioning of VMs for users. (More on this in the next section.) Of course this will also cost more money.
When does VDI make sense?
Given these comparisons, should you use VDI technology in your environment? Hopefully it’s obvious that any environment can benefit from a blended approach. Just as it makes sense to build a comprehensive application delivery solution that involves server-based computing, traditionally installed applications, and application streaming, you should think about the desktop as “just another application” that can be delivered in many ways depending on the situation.
The over-hyped example that’s always used to answer the question of “why would someone need VDI” is for remote software developers. The idea is that the remote developers can each have their own VM or bladed desktop and do whatever they want to it without affecting other users.
While I definitely think this use case is a good example, the problem is that VDI is also useful in many other ways. My fear is that always using the developer example will lead people to think that they don’t need VDI if they don’t have any remote developers.
The reality is that VDI technology is useful in any scenario where you have power users or users who need strange, non-terminal-server-compatible applications, but where the users still need the flexibility associated with traditional SBC environments. (Connecting to applications from anywhere, over slow connections, etc.)
VDI will be useful just about everywhere, albeit in a limited way. It will just be one of the multiple methods that can be used to provide a desktop to a user.
My view is that VDI can play a role in nearly 100% of all companies out there, but only for probably 2-4% of the users at those companies. So yes, it’s useful, but no, no one is throwing out their SBC environments or desktop computers.
What technology makes VDI possible?
Now that we’ve looked at what VDI is and where it can be used, let’s roll up our sleeves and look at the underlying technology that makes VDI possible.
At the most basic level you need two things:
- A mechanism to run many Windows XP desktops in your datacenter.
- A method for your users to remotely find and connect to those Windows XP desktops in the datacenter.
How to get Windows XP in the Datacenter
The first part of a VDI solution involves getting your users’ Windows XP workstations running in your datacenter. As I briefly mentioned previously in this article, there are several ways to run lots and lots of Windows XP workstations in your datacenter. You could:
- Buy individual Windows XP desktop machines and stack them floor-to-ceiling in your datacenter.
- Buy server blades and install a copy of Windows XP on each blade.
- Use VMware or Microsoft Virtual Server and build huge servers that each run many VMs.
A full analysis of the pros and cons of each of these three techniques is beyond the scope of this article. Suffice it to say that the VM-based solution usually “wins” for most people because it’s the most cost-effective.
How users find and connect to remote Windows XP VMs
The second part of a VDI solution is the technology that’s needed for a user to find and remotely connect to a Windows XP desktop running in the datacenter. The “connecting” part of the “find and connect” is easy, since Windows XP has terminal server functionality and support for the RDP protocol built-in. (This is called “remote desktop.”) So really, any thin client that can run an RDP session can connect to a remote Windows XP desktop via RDP just as easily as it can connect to a Windows terminal server.
So while the “connecting” is easy, the “finding” part is a bit more tricky.
One option would be to build all of these Windows XP VMs in your datacenter, and then to give each one a unique hostname and/or IP address. That way each user could connect the RDP software on their client device to “their” Windows XP datacenter VM.
This is fine in theory, but would be a huge pain in real life. Specific problems include:
- All the VMs would have to be on all the time, because if a user tried to connect to a VM that was off, the connection would fail. There is no way for the user to tell the VMware server, “Hey! Power on my VM!”
- There is no load balancing. If you run 15 VMs on each VMware server, it would be pure luck as to which VMs (and therefore which servers) were busy at any given time.
Of course VMware has a scripting interface, and a lot of the early VDI shops wrote complex scripts and custom web connection portals that would look at an incoming connection request and then tell the VMware server to quickly power on the VM for a specific incoming user.
Over the past several months, several companies have released various products that have addressed this problem. These products can be lumped into the generic category called “Desktop Brokers” or “VDI brokers,” and they all work in basically the same way. Like the rough scripts of the early adopters, these VDI brokers receive incoming user connection requests and then route the user to a Windows XP VM that’s all ready to go for them. For the sake of space we won’t go into the details of a broker product here, but Ron Oglesby has written a fantastic overview.
These desktop broker products ensure that a user is connected to their desktop VM. Great! Now it’s just like a “regular” client-based desktop, with each user running their own Windows XP desktop, except we have some of the advantages of server-based computing.
So what’s the problem?
The problem is that if you have 100 users, now you have to manage 100 Windows XP desktop images. One thousand users means one thousand images. You get the idea. This goes back to one of the main disadvantages of VDI in general—that a Windows desktop is a Windows desktop, and if you don’t manage it, it will be a nightmare—regardless of whether it’s physical or virtual.
The Dynamic Desktop
Let’s take another step back. Remember why Terminal Server / Citrix Presentation Server SBC desktops are so nice? It’s because you only have one instance of Windows running to support 50-75 user desktops, versus 50-75 instances of Windows XP in a local desktop or VDI solution.
But does that mean that all 50-75 users are getting the exact same desktop? Of course not! We use things like roaming profiles to ensure that each user gets their own shares, printers, color schemes, and other desktop customizations.
But what about applications? Do all 50-75 terminal server desktop users see the same application list? Again, of course not! There are many ways to customize the applications that each user sees:
- You can run the Citrix Program Neighborhood Agent software on your Citrix Presentation Server so that each user gets a dynamic application list in their Start menu. These icons would then launch ICA sessions to seamless published applications running on other Citrix Presentation Servers. (More on this in brianmadden.com article Doc# 275)
- You can use an application streaming solution, such as Microsoft Softricity, Altiris SVS + AppStream, or Citrix Streaming Server, to dynamically stream applications to the server so that they are there locally and available to the users.
- You can install the applications on the server legitimately, so they can be accessed locally via the Start menu.
These are only a few options, but the point is that even though you have 50-75 users “sharing” the same instance of Windows, each user gets their own environment. And why is that? Because a generic template desktop is just the starting point, and that desktop template is dynamically customized with roaming profiles, PN Agent application links, and streamed applications to provide the user with their own unique desktop environment.
The Dynamic VDI Desktop
By now it should be pretty obvious where we’re going. On one hand, VDI is cool in a lot of scenarios, but it’s no fun trying to manage hundreds of Windows XP desktops. On the other hand, dynamic desktops are used in SBC environments to provide custom desktops for users based on a single instance of Windows.
Now put your hands together.
The result? The dynamic VDI desktop. Imagine a VDI environment where you get both of these sets of benefits. From a technology standpoint, this means that instead of having one VMware disk image for each and every user, you could build a generic template disk image. This image could be provisioned (on demand) as users connect, and it could be dynamically customized with the user’s applications. You get a fully custom desktop for each user with the management simplicity of an SBC desktop. It’s truly the best of both worlds.
On top of that, there’s another major benefit of the dynamic VDI desktop. That’s the fact that the VMs don’t need to be created (or even running) until a user needs them. So if you have 1000 users but no more than 800 are ever running at the same time, you can scale your environment for 800 users and the system will provision and start up VMs as users need them.
Before we continue it’s important to point one thing out. These dynamic VDI advantages do not mean that the dynamic VDI Desktop will replace traditional local desktops or SBC desktops. The dynamic VDI desktop is still VDI, and complete desktop delivery solutions will still involve a blend of VDI, SBC, and traditional desktops.
The “best of both worlds” applies to scenarios where VDI already makes sense. (And in some cases, it might help you decide to go with VDI where you wouldn’t have previously due to the daunting management demands.) The idea is that you still decide between SBC, VDI, and traditional desktops, and then for the subset of desktops in your environments where you feel VDI is a fit, you can then further decide whether you want those to be static one-to-one mappings or dynamically provisioned desktops based on a shared template.
Practical Implementation of the Dynamic VDI Desktop
Now that we’ve looked at what a dynamic VDI desktop is, let’s look at how you can make this happen. The main thing you need is a way for each new user to be connected to a generic Windows XP template.
It works like this: When a user’s connection request comes in, the system makes a new VM based on a copy of the Windows XP template disk image, and the user connects to that new VM. Once the logon is complete, the roaming profile is loaded, the other dynamic application customizations take place, and the user is ready to go.
So how do you do this? A lot of people think that because it’s simple to create a new VM in VMware, and it’s simple to copy a VMware disk image, that you can do all of this with VMware and some scripting.
Unfortunately there are several roadblocks that keep this from being so easy. Probably the biggest one is the fact that you’ll need to “boot” each new VM based on the VMware disk template. This is a problem because Windows stores things like the computer name and the IP address in the registry which is stored on the disk, so each new Windows XP VM that you boot up would have the same information.
Of course you can easily change this via a startup script that runs within the VM. You could have it check some database and then fill in the appropriate information, but if you change the computer name then you would have to add the computer into your corporate domain, and that requires a reboot! (And of course you’d want it in the corporate domain since you need to manage it and use roaming profiles.)
As you can see, the dynamic VDI desktop concept is great, but in terms of practical implementation, it’s not quite as simple as “just copying VMware disk template files.”
In light of this, some people have decided that they want to build a VDI solution without a dynamic desktop—that instead they’d rather just fill a SAN with one VMware disk image for each user. While this is simple to do from a technical standpoint and avoids the problems mentioned above, can you imagine how much this would cost in terms of storage space? Even a smallish environment with 100 VDI users would require 2TB(!) if each Windows XP desktop disk image was 20GB.
This is where technologies like Ardence come in. Citrix announced that they were buying Ardence in December 2006, and the deal closed in January 2007. So now Ardence is officially a Citrix company.
A brief overview of Ardence’s technology
Let’s put all of this VDI stuff aside for a few moments and just look at Ardence’s raw technology. After that we’ll circle back and look at how Ardence enables dynamic VDI desktops.
Ardence is a software company. In the Ardence world, your computer’s disk drive is actually a disk image file sitting on a remote Ardence server. (In concept, these disk image files are similar to VMware disk image files.) Ardence calls these “vDisks.”
To have your computer use this vDisk instead of its own local hard disk, you change the boot order preference in the BIOS and configure the computer to boot from the network (a PXE boot). When the computer turns on, it boots to the network, grabs an IP address from the DHCP server, then reads some of the extended DHCP options to find the bootstrap location. The computer then downloads a very small bootstrap which causes it to contact an Ardence server.
The Ardence server recognizes the booting computer via its MAC address and checks a configuration database to figure out which vDisk file that computer should use. The client computer then mounts the vDisk just like a normal disk and the boot process continues as normal.
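For illustration, the DHCP side of that boot sequence might look something like this in an ISC dhcpd configuration. The addresses and bootstrap filename are hypothetical; `next-server` and `filename` are the standard dhcpd directives for pointing PXE clients at a boot file:

```
# Hypothetical dhcpd.conf fragment for PXE-booting Ardence clients
subnet 10.0.0.0 netmask 255.255.255.0 {
  range 10.0.0.100 10.0.0.200;
  next-server 10.0.0.5;       # TFTP server that holds the bootstrap
  filename "bootstrap.bin";   # tiny bootstrap that contacts Ardence
}
```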
Ardence calls this technology “streaming,” although personally I’m not sure that’s the best name for it. To me, “streaming” suggests that the disk content is copied down to the client device as it’s needed. I guess in some ways that’s true. But with Ardence the client computer is actually mounting a disk volume over the network. The client computer does NOT need to have any hard drive locally, and the entire remote drive image is NOT copied or cached locally.
Before we go any further, I think we need to take a deeper look at some of the technology that Ardence is using here.
At the most basic level, Ardence developed a Windows disk drive device driver. Much like Dell or HP has drivers that enable Windows to recognize their RAID controllers, Ardence has a driver that enables Windows to recognize a remote Ardence vDisk being accessed across a network.
The core of this is their custom developed UDP-based disk drive protocol. It’s UDP-based because UDP is packet-based and connectionless, which means less overhead than TCP. (The downside to this is that UDP packet delivery is not guaranteed, but in today’s switched networks, packet delivery is virtually guaranteed anyway, and Ardence built custom logic directly into their protocol that re-requests dropped packets as needed.)
In concept, the Ardence protocol is kind of like iSCSI, although Ardence’s is much more efficient. Why? The Ardence protocol was developed from the start for use over a network. This is very different from iSCSI, which takes a protocol that was developed for local access (SCSI) and wraps it in a TCP layer. In iSCSI transfers you’ll often find that the protocol header is larger than the payload!
Another fundamental key to the Ardence protocol is that it can endure network failures and disconnects/reconnects. This capability is built right into the Ardence disk driver and protocol.
So what does all this mean? In a typical network boot scenario (where Windows is booting from a network disk instead of a local disk), if you disconnect the network cable while Windows is booting the system will blue screen. In the Ardence world you can pull the cable during the boot process and the process just sits there. The instant you plug the cable back in, the boot process continues.
To really dig into the cool stuff, we’ll need to look at the Ardence vDisk files that are stored on a file server. There are several different ways that a vDisk can be used. The method that I’ve described so far could be called a “private” disk model, where each client computer is one-to-one mapped to an Ardence vDisk file. The Ardence disk driver running on the client computer redirects physical disk block-level read and write requests across the network to the vDisk file, and the vDisk file grows and changes as the client computer is used. Again, this is a lot like a VMware VMDK file.
However, there is another major option that Ardence provides with respect to disk files. Instead of each client computer having a one-to-one mapping to each of their own “private” disk files, you can have multiple client computers share a single “public” read-only vDisk file (with proper Microsoft OS licensing of course). In this case Ardence configures the disk file as “read only,” and all client computers get the same image.
Of course doing this requires some additional intelligence because as you can imagine, Windows would blue screen if it tried to boot to a read-only disk.
The way Ardence handles this is that they transparently redirect disk write requests to another location. Each client computer that’s sharing the same read-only vDisk ends up with a “delta” (or “write cache,” as Ardence calls it) file that holds everything that’s changed on that disk since the computer booted up. This write cache can be stored in a specially segmented area in the client computer’s RAM, on the client computer’s hard drive, or as a separate file on a network file server.
The beauty of using these public read-only disk images is that when you reboot a client computer, the cache is cleared and the computer starts fresh. (What if you don’t want the computer to be reset to the base image on reboot? This is what the “private” disks are for that we talked about first.)
Delivering Desktops with Ardence
Even limited to that brief description of Ardence’s technology you can see how it is useful when delivering desktops to users. There are two use cases:
- When used in concert with other VDI solutions, Ardence can facilitate “on demand” dynamic desktop images for backend VMware servers.
- Ardence can deliver local dynamic Windows XP desktops to actual full-blown desktop PC devices, combining the strengths of “traditional desktop PCs” with centrally managed SBC environments.
Let’s explore each use case.
Using Ardence to power VDI desktops
Imagine you have a huge VMware server that is ready to host Windows XP VMs for users. Since you don’t want to consume terabytes of SAN space storing unique disk images for each user, you’ll probably want to go the “dynamic desktop” VDI route. That means that you need each of your Windows XP VMs to boot from the same template disk image, and that’s exactly where Ardence fits in.
With Ardence, you configure your VMware VMs so that they PXE boot. When a new VM starts up, it PXE boots and contacts the Ardence server. The Ardence server looks at the ID of the booting VM and then mounts a vDisk for it. Of course this vDisk can be shared between hundreds or thousands of VMs.
Since the Ardence technology is based on a device driver running in the Windows XP VM, Ardence intercepts calls for things like the domain RID and computer name, and it automatically looks up the ID of the client computer in its own database and replaces the generic template computer name and RID with the real ones for that device.
So with Ardence, it’s possible to enable the concept of the dynamic VDI desktop.
Another interesting use of Ardence in VDI environments is that you can use it to stream the VMware host software that runs all of the Windows XP VMs. For instance, in many cases you will probably only be able to run 15-30 Windows XP VMs per physical server host. If you have several hundred VDI users, suddenly you’re talking about 20 or more VMware host servers. How are you managing those?
With Ardence, you configure your host servers to PXE boot a VMware image. Once the host is online, it can add itself to the pool and immediately begin hosting Windows XP VDI VMs for users (which will also be PXE booted from Ardence disk images). In some ways this is the “most” virtual environment, since you completely separate all physical hardware from OS execution.