I sat down with a client the other day and was asked some pretty simple questions that couldn’t easily be answered. Those questions were, “So where is virtualization going anyway? Not in the next 12 months, but in the next 2-3 years? What should we know about virtualization that’s going to mean changes as big as the ones we saw when we first put in our virtualization platform?”
This drove an interesting discussion with some follow-up on my part, in both research and some long thought over numerous cold beers and a whiteboard. Once I figured out where I thought it was going, I started putting my ideas into Word and figured that more people than just me would be interested in where this is all going and why it’s important. Of course I have no crystal ball--this is just one guy’s opinion of where things are and where they will be going. (Feel free to comment with your thoughts.)
Right now you can talk to any company paying lip service to virtualization, or any that considers itself a virtualization expert, and they will begin to talk about the “future” of virtualization. Inevitably they all say some (or all) of the following:
- Virtualization will expand beyond just servers and into the desktop realm. (Mostly people are talking about VDI here, and of course about five different packages for VDI have been released this year alone.)
- Virtualization will take the next step into the enterprise expanding beyond server consolidation.
- Virtualization technology’s next step is to get better performance from VMs to assist the move deeper into the enterprise.
- Virtualization may act as a conduit for software vendors to deliver application appliances to you preconfigured and ready to go. You just modify some settings for your environment and you are up and running.
- Built-in virtualization technology at the hardware level will take it to the next step. (Here we’re talking about processors that move the hypervisor below Ring 0, into what’s often called “Ring -1,” so guest OSes can keep running in Ring 0, or network adapters and HBAs that have built-in hardware hypervisor support to allow for better control and resource allocation at that level.)
Anyway, if you read this the way I do, I don’t see anything huge here. I mean, when ESX really started making an impact in IT a couple of years ago, it did so because it changed the landscape of IT infrastructure as we know it. It changed the way servers were provisioned and spec’d out, how they were purchased, and the number purchased. It changed every calculation we had been making, from the cost of the servers to the amount of HVAC, floor space, UPS capacity, power, network requirements, and storage we needed. It changed the mindset of architects and engineers when designing solutions and gave them options they never dreamed of. It allowed businesses to drive down costs while still providing the same levels of service on a lot of applications, all the while decreasing recovery time for individual applications both locally and during DR.
Do any of the “future of virtualization” statements above come anywhere near to having this kind of impact? Or does it really just extend the existing benefits a little bit without changing our mindset and the way we’re already doing things with our virtualization technology of choice?
In a drive-by world where everything is “what can you do for me today,” most people aren’t looking at the reality of this technology’s future. The reality is that sooner or later it’s going to be a commodity, just like hardware. Face it, virtualization is just another way to supply hardware to an operating system and its applications. The future of virtualization isn’t in the stuff listed above. These items (certainly the hardware) will have an impact on how virtualization is done, but software that virtualizes hardware will, in a couple of years, be a commodity just like the hardware it runs on. When you select that software you will do so just like you purchase hardware. Right now I believe that the real race going on in the virtualization space isn’t about who can VMotion or support four-processor VMs, etc. The real race is about who has the first lightweight, fully integrated hypervisor that is OEM’ed on servers and desktops. (That’s right, I said “desktops,” but we’ll get back to that in a minute.)
When people talk about a hardware hypervisor (Microsoft, a hardware vendor, VMware, anyone), most people listening assume this is built right into the hardware and that you will install the OSes right onto this. Let’s be realistic. Even when it’s “built right into the hardware,” it’s still software. The line between hardware and software has blurred over the years with on-board BIOSes for hardware components that are fully configurable and upgradeable. The only thing keeping them from being their own OS is that they can’t manage other hardware components and you can’t install an app on them.
Now let’s picture a world where you buy an HP server, with OEM’ed QLogic HBAs, NICs from Intel (or OEM’ed), memory from HP or Crucial or (insert favorite vendor here), processors from AMD or Intel, drives from... you get the picture. Even if the processors, NICs, and HBAs all had hardware hypervisors built into them, what controls those hypervisors and connects VMs to them? What connects a virtual machine to each of these components? Right now the closest thing to bare metal is ESX. And improvements to the hardware will make ESX’s performance even closer to raw hardware performance. But what is the future? The future is a thin, OEM’ed layer that can work with and control all these devices. It will not be as bulky as any Windows or Linux OS you have ever seen and will more closely resemble a glorified piece of firmware that boots and starts dividing up resources to whatever number of VMs you have running on the machine. Of course it will still have some type of interface while the server and its VMs are running, but it will be extremely lightweight and self-sustaining. This will come with every x86 server and desktop. What you will buy is not the hypervisor but the management tools that wrap around it. That is the key, and this is where we bleed into desktops a bit.
Right now on a desktop you have a single OS (let’s not talk VMware Workstation or Virtual PC for this article) with applications installed, anti-virus, spyware/malware detection and removal software, an Internet browser, etc. What if your machine didn’t look like this? What if your machine looked like the image below?
In this example all four VMs would load at boot time. The network and security VM would be the first one up after the hypervisor loads. This VM will scan all traffic in and out, almost like a network virus wall or an SMTP gateway server. It could be a one-stop shop for IDS, IPS, and even traffic scanning between VMs. This security VM may even be a VM that comes OEM’ed along with the hypervisor.
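To make the idea concrete, here is a minimal sketch of how such a security VM might enforce a default-deny policy on inter-VM and outbound traffic. All names (the `Packet` shape, the VM labels, the policy table) are hypothetical illustrations, not any vendor’s actual API:

```python
# Hypothetical sketch of the security VM's traffic filter.
# VM names and the policy table are illustrative only.

from dataclasses import dataclass

@dataclass
class Packet:
    src_vm: str   # originating VM
    dst_vm: str   # destination VM ("external" for off-host traffic)
    port: int

# Policy table: (source, destination) -> ports explicitly allowed.
POLICY = {
    ("trusted", "external"): {80, 443},
    ("untrusted-apps", "external"): {80, 443},
    ("trusted", "untrusted-apps"): {3389},  # presentation channel, RDP-style
}

def allow(packet: Packet) -> bool:
    """Default-deny: traffic passes only if the policy explicitly permits it."""
    allowed_ports = POLICY.get((packet.src_vm, packet.dst_vm), set())
    return packet.port in allowed_ports
```

The point is the placement, not the code: because the filter lives in its own VM below every guest, even a compromised trusted desktop can’t bypass it.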
Next is your trusted desktop. This is where you work normally on your corporate LAN and interact with trusted machines/networks. Which machines and networks are trusted is configured in the security VM and is possibly centrally controlled by an administrator. The cool thing is that if this is a personal laptop or computer you may be able to “outsource” your security by buying a subscription to “Symantec’s Security VM Package.” (I know, it’s a lame name, but it’s all I could come up with.) Anyway, the package may be purchased and delivered to the user to replace an existing one shipped by the hardware vendor (back to application distribution by VM appliance). They can then offer you a service where they update the VM (like getting new virus definitions), but in this case it’s virus, malware, spyware, firewall, etc., and it’s not tattooed in the trusted OS and can be hardened in ways that the trusted OS can’t while still remaining functional.
The trusted desktop is now behind a line of security that will protect it from the outside and from other VMs. Then you have a default non-trusted application VM. Maybe this VM runs applications like an Internet browser, media player, etc. This VM (or more likely its applications) is invoked whenever the user makes a call outside the trusted area or uses an application specifically configured for high security. The application is then presented to the trusted desktop (kind of like an ICA seamless window) but is actually running in another VM. The non-trusted application VM might not even have an entire OS as we know it. Instead this VM may be another VM appliance with a small OS that loads just enough to support that browser app and a few multimedia-type apps and presents the screen (like ICA or RDP) into a window in the trusted VM.
Finally, you’ll have the ability to add other VMs as needed. Maybe you’re a contractor like I am and need a specific build with a specific virus package and hotfixes to connect to a client site. This hypervisor’s configuration would allow for that to be added, and may even have a rule base set up in the hypervisor that says, “When the Contract A VM is turned on, the trusted VM isn’t able to connect to the network.” Then when I am at the client site I invoke their VM (maybe by booting it, or maybe by simply using a hotkey sequence, almost like an Alt-Tab) and I am ready. Their security team could even have these VMs preconfigured for delivery to contractors or even new employees. (There I go again with virtual appliances, only in this case it’s a home-grown one.)
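That kind of rule base could be dead simple. Here is a sketch, under the assumption that the hypervisor exposes the set of running VMs and a per-VM network switch; the rule format and names are my invention for illustration:

```python
# Hypothetical sketch of a hypervisor rule base: when certain VMs are
# running, other VMs lose network access. Rule format is illustrative.

# Each rule: (trigger VM, list of VMs cut off while the trigger runs).
RULES = [
    ("contract-a", ["trusted"]),
]

def network_enabled(vm, running_vms):
    """A VM keeps its network unless a rule triggered by a running VM blocks it."""
    for trigger, blocked in RULES:
        if trigger in running_vms and vm in blocked:
            return False
    return True
```

The hypervisor would evaluate this every time a VM powers on or off, so booting the Contract A VM instantly isolates the trusted desktop.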
As you can see, the concept laid out above would solve a number of issues and change the way we do desktops. It would increase security, allow for isolated work environments that are independent of each other, and allow for minute control of the desktops at a level we don’t have now. In addition it would give vendors numerous options, like delivering security packages whole to a desktop and not worrying about “what rev of Windows do they have? This doesn’t run on XP SP2 yet,” etc. Hell, the firewall VM may even be able to shut down network connectivity from that machine after so many days without an update. Wouldn’t that be cool!
Anyway, with the impact this could have on the desktop and the complexity that it would introduce from a management standpoint, I think you can see where I am going by now. Imagine the following environment at the server level:
- A security VM independent of the hypervisor that does IDS and IPS for all the VMs on the host, along with acting as a network virus wall.
- A configuration VM that manages the hypervisor on that box that is independent of the security VM but relies on it.
- A VM appliance supplied by your backup vendor that you put on each server to act as a backup job manager for each VM. This would manage backup resources of VMs on that host, ensuring that bandwidth and processing resources aren’t hogged by backing up a large number of VMs at once. It just detects the VMs currently running on that host and (based on configurations from the centralized management server) schedules and moves jobs accordingly.
- A central management tool to handle the configuration of the hardware hypervisor, the management and interaction with the hardware, and the startup sequence of the appliance VMs that will support the application VMs you’ve installed.
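The backup-manager appliance in the list above boils down to a throttling scheduler. A minimal sketch of that logic, with a made-up function name and an assumed concurrency limit pulled from the central management server:

```python
# Hypothetical sketch of the backup-manager VM's scheduling logic:
# back up the VMs found on the host in small batches rather than
# hammering the host's bandwidth and CPU by running them all at once.

from collections import deque

def schedule_backups(vms, max_concurrent=2):
    """Yield batches of backup jobs, never more than max_concurrent at a time."""
    queue = deque(vms)
    while queue:
        batch = [queue.popleft() for _ in range(min(max_concurrent, len(queue)))]
        yield batch
```

Because the appliance detects running VMs itself, a VMotion’ed guest simply shows up in the next host’s queue; nothing central has to be re-pointed.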
Really, the number of things that can be done on the server side is endless, and I’ve just touched on a few. But the bottom line is that in the future the hypervisor will be a commodity. The future of virtualization is a lightweight hypervisor that will manage the hardware and its extended capabilities being built into NICs, HBAs, disks, processors, etc., to support numerous OSes/VMs on a single server. No hardware fix will show up that makes Intel, Broadcom, AMD, QLogic, and Emulex gear all manageable the same way by some piece of firmware on the motherboard. Nope. It will be software, and it will be installed on every server that rolls out of HP or Dell or IBM. You may still choose to use only one OS, but you won’t have to.
Now you may ask, “Who will provide it?” Who knows? My bet is VMware gets there first. They just have too much of a lead this early in the game. Whoever does it will more than likely provide the hypervisor at little cost to the OEMs, with each server coming with a limited single-host management tool. Then you would be able to purchase the additional tools to provide management for large numbers of hosts, their VMs, and the appliances being installed. This of course is all in addition to the things that tools like VirtualCenter already do (VMotion, resource balancing, VM provisioning, automated host recovery, etc.).
My guess is you will pick the hypervisor the way you pick hardware vendors. You go into a shop with all HP servers and they basically run HP because they like the support, the tools available, the response they get from sales, etc. You could say the same for an IBM or Dell shop. You will find the vendor that fits your needs and then buy in bulk. But the key will be the tools and management options. You don’t find organizations buying two Dells this week, four HPs next week, and three IBMs the week after. They purchase a standard and stick with it, because they have found the vendor of a commodity that fills their needs. (I mean, really, is there that much difference between a dual-proc box from any of the vendors?) The same reasoning we apply to hardware purchases today will apply to the hypervisor in the future. It’s just going to be an extension of the server. The tools around it are what’s really important.