Where is all this Virtualization going? Part Two (a year later)

It's been just over a year since I wrote this article about where I thought virtualization was going. The primary idea behind the article was that while a lot of exciting things were happening around server/x86 virtualization, the improvements we were seeing were just that: improvements. They were not major paradigm shifts in how we, as IT professionals and architects familiar with VMs, look at systems. I talked about how VMware and all VM platforms changed the way we thought about datacenter deployments, server implementations, etc.; but the improvements we were seeing at the time were not a huge change like the first time we actually “got” what VMs meant to IT as a whole.

With all that said, I suggested the idea of an embedded hypervisor shipping on laptops and desktops that could change the way we think about, manage, and deploy desktops. It would also enable some interesting (at least to me) use cases for security VMs and Internet-browsing VMs, and could be a drastic change for IT and businesses.

I'm writing today's editorial as a follow-up to what virtualization.info published a few days ago:

virtualization.info has learned that Phoenix is developing its own hypervisor, called HyperCore, designed to host traditional operating systems like Windows Vista, side by side with a special multi-purpose environment called HyperSpace, produced by Phoenix itself.

HyperCore is a true bare-metal Virtual Machine Monitor (VMM) which will load directly from Phoenix BIOS, while HyperSpace will be able to provide basic capabilities for daily tasks, like internet browsing and multimedia files view, in isolated virtual machines. The HyperSpace will also provide some security tools, like an anti-virus, to recover other compromised virtual machines.

HyperCore will also be able to run embedded OSes inside its virtual machines, developed by third party ISVs for different purposes.

While I initially wanted to call this “vindication,” it really is just the next step in virtualization, and obviously people smarter than me are running with it. While some people argue about the value of going this direction and wave their hands about better-developed applications, better security tools, and more elegant, more efficient ways of doing virtualization, the reality is this: it's not always the most elegant or technologically perfect solution that wins and takes hold in IT. If the best technology always won, would Windows have beaten NetWare in the file server game? Or would we still see Windows application servers and NetWare file servers today?

In technology, we have to deal with reality. The reality is that for ten years now (since the first time I really started playing with applications at the Citrix level), I kept hearing the phrase “once apps are developed better, and standards are followed,” or “once everyone gets to 32-bit apps,” etc. What we have to deal with is that applications are developed as fast as possible, as cheaply as possible, for as many people as possible. Of course, this cheap development is done in a constantly shifting landscape of changing OSes, OS upgrades and patches, new hardware, and changes in supporting pieces like the dreaded .NET Framework and ODBC versions, not to mention the changes in the apps themselves that seem to come month after month.

With all this change, we are not going to reach application nirvana at any point in the near future. That said, the idea of a compact embedded hypervisor in a desktop is a GREAT IDEA! The concept of having a separate VM to run non-trusted applications (graphic diagrams and details in the previous article) is great. Hell, some of us do it now: we run VMware Player (or VMware Workstation) and keep a VM for clients with their anti-virus, their build, etc. This is just taking it to the next level.

Let’s use one specific example here that has me excited. I have a six-year-old daughter. She LOVES the computer. She likes to go to Webkinz, Disney, etc. I love that she loves the computer. What I hate is her using my computer and needing to install ActiveX controls, player software, etc. I also worry about having to keep her system up-to-date and secure when she gets older.

What if she had one of these machines with the embedded hypervisor? Something with a separate security VM that I controlled, I updated, and I managed? Something she couldn’t get into? Heck, something I could even outsource, replacing the native VM with a higher-end one for a few bucks that constantly updated automagically and could do IDS, IPS, and even network blocking of malicious traffic in addition to the "traditional" desktop security stuff.

Let’s extend that idea to the enterprise. The ability to configure security VMs, and outsource them if needed? Isolate the machine on the network while letting users have more freedom with their PCs when they're away from the office? All without “tattooing” the native Windows OS that we rely on for business apps. Or give users the ability to run a separate VM from home. The kids can mess that one up, but the “work” VM that they hot-key to, or that loads automatically when connected to the corporate network, is available only to them and is the only OS that can connect to the VPN back to the corporate network.

I think the future of desktop virtualization hasn’t been written yet, but I know that these types of ideas will become reality, and that is very exciting. This modularization and isolation of the things we do day to day could solve a number of problems in IT. I don’t think this is the end-all cure for our desktop woes, but I sure am excited about the tools that will be offered because of it. Will these ideas still be our reality in 2017? Who knows (probably not), but by then we’ll have a different set of issues to deal with. Or we may have applications that are all well written? Or I may have quantum computing laptops at my disposal with totally new apps and OSes, and all of this stuff will be laughed at, kinda like when we look at pictures of guys in top hats riding bicycles with a huge wheel up front and a roller-skate wheel in the back.

Join the conversation



I think this is all great and a leap in the right direction. However, until the industry figures out licensing and management of applications in this shift from machine-centric to user-based computing, I don't see the economics working in the real world. Application virtualization technologies, SoftGrid, etc., are still maturing and have plenty of gaps that don't enable this model fully yet. I hope we get to a desktop world where users truly become decoupled from the OS and the app, and these are nothing more than services that are provisioned dynamically based on a user's needs, learned over time through usage patterns. Add some new, efficient ways to manage all the mobility this is going to enable, and I think the picture is much more powerful than hypervisor, HyperSpace, VMM, or whatever they think of next. This could really empower businesses to be more agile and efficient with the resources they have. Great articles on this site, and some really smart people. I love this stuff, and I'm excited to be watching this evolution.
I've seen a lot of interest in the industry for Windows Server Virtualization on the desktop... meaning if MS could release it for Vista or the next Windows, it'll be a hit too. "Windows Virtualization (WV)"
Great article, Ron. I think the scenario you described will be the future. However, it is not only the software vendors, such as Microsoft and Citrix, who will play a significant role. Also think of the hardware and CPU manufacturers, such as Intel and AMD, who want to protect their business model. Look at Intel, for instance. They already ship all their new CPUs with a technology called vPro, which includes on-chip virtualization technology called VT-x. It is possible to control kernel processes of virtualized guest operating systems through hypervisors that take advantage of vPro and VT-x.
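(As a quick aside: on a Linux box you can see whether a CPU advertises this hardware support by looking for the vmx flag (Intel VT-x) or svm flag (AMD's equivalent, AMD-V) in /proc/cpuinfo. A minimal sketch, assuming a Linux host:

```shell
# Report whether the CPU advertises hardware virtualization extensions.
# "vmx" = Intel VT-x, "svm" = AMD-V; both appear in the flags line of /proc/cpuinfo.
if grep -qE 'vmx|svm' /proc/cpuinfo 2>/dev/null; then
    echo "hardware virtualization: supported"
else
    echo "hardware virtualization: not reported"
fi
```

Note that the flag can be present but disabled in the BIOS, so this is only a first-pass check.)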

Now compare this to graphics adapters 10 years ago. At that time the rendering of complex graphics elements was primarily done in software, and the graphics hardware was only able to address individual pixels. A good example is Windows GDI function calls when rendering the Windows user interface: GDI can happen in software or in hardware, depending on the capabilities of the graphics adapter. Today all graphics adapters come with GDI accelerator chips that cost only a couple of cents. Even RDP takes advantage of GDI accelerators on the client side. The same thing will happen to virtualization -- VMware is still doing it in software, while Microsoft Viridian and Xen are already using the VT-x hardware acceleration functionality. Yes, VMware is still performing better, but guess what the future will be. So I think even VMware will take advantage of hardware virtualization functionality in the future. These new hypervisors can be built so small that they fit into a boot ROM. Look at VMware ESX Server 3i, which has a memory footprint small enough to be embedded into a ROM. The Microsoft Viridian architecture is designed to do the same. This also means that it doesn't even require an OS to make this kind of virtualization happen.

Now, coming to the next point Ron mentioned: system management. Intel vPro already includes something called Active Management Technology (AMT), which allows system management without the help of an operating system! I guess that AMD has something similar, but I'm not sure. By using these hardware-level programming interfaces, the upcoming management frameworks, such as Microsoft System Center, can remotely control the "raw iron" of virtualization hardware platforms, including their embedded hypervisors. Xen includes tools that seem to go in the same direction. In the future it will be the management tools that matter, not the pure hypervisor. Hypervisors will be a commodity, pretty much in the same way standard graphics adapters are today.

So the technology to make Ron's ideas come true is already available, even if it is not always visible if you don't know where to look for it. If Intel (and AMD) and Microsoft (and Citrix or VMware) combine their strengths with PC manufacturers such as Dell, HP, Fujitsu Siemens, or IBM, the story will be complete in a short time. All it takes is some new OEM agreements, and you will be able to buy cheap PC hardware with embedded OEM'ed hypervisors (from Microsoft, XenSource, or VMware) and one license for a guest operating system of your choice (guess what that will be). Not that much of a change if you compare it with today's business models of the big manufacturers. The only difference is that VMware and Citrix would now also belong to the group of companies who define the computing platforms we can buy. From this perspective, spending $500m on hypervisor management tools makes a lot of sense!

And, as a final remark, you can be sure that these big market players will find OS and app licensing models that are compatible with the new virtualization paradigm. They will change licensing when the time is right for them, always with shareholder value in mind.

Spot on, Ron.

I have been expecting (but have not found) the rebirth of small OSes (such as were used in many small communication devices before Linux hit the scene) built around a single application. Such an OS is not multi-purpose, in that it does not have an OS interface that allows users to extend it: just enough smarts to handle CPUs, memory, and I/O. This makes it really small. Now a smart player can develop a very small VM that you download that is their app. I'm guessing it will be the kind of app your daughter will want when she is maybe 10-16. Everything is pre-configured, which makes it so easy for the user.

There are business apps that could use this model in addition to consumer ones, but I would expect the consumer version first. Unless you follow the adage that sex sells, in which case I guess it will be porn!

When desktops are shipping with a hypervisor built in, maybe we will see this too.
Ron, thanks for the heads-up regarding Phoenix. And Benny added an excellent additional viewpoint.
So now, we see people looking at all the layers, including BIOS.
It does tend to make you wonder about the other glue and components. Would there be use for virtualization onboard other components? Would a fixed disk benefit from decoupled control? (Yes, I said fixed disk, and I do know about SANs)
Virtualization and VMs are fine, but we're still kind of stuck on the OS=BOX equation.
One common thread that I'm glad to see is the recognition of the need for management. It's wonderful that we can provision working environments at a mind-boggling pace, but those environments are not static, either. Tell me, a year from now, are you sure that image/VM doesn't have IE7 on it? This technology amplifies and accelerates the old aphorism of 'To err is human; it takes a computer to really foul (sic) things up.' I think that a management system that works cross-boundary could be killer. Something that will recognize disparate VM types and layer entities, document them, and manipulate them. And if we could get some real metrics for VMs along the way...
/me runs off to patent HyperPr0n (tm)
So while this note just focused on a concept that we spoke about a year ago here, there are other cool things going on (maybe I should do a part 3). The cool thing about this concept is that I really believe it will cause a dramatic shift in how the desktop we see today is managed and deployed. That is what is exciting to me. I couldn't care less who makes the hypervisor... or where it is buried. What I like is the concept.

Oh, and Benny always has good comments... Trust him, he's a doctor. :-)
Great comments as always, but I'd have to disagree with the comment that VMware performs better. Yes, ESX does better than XenServer when it comes to networking, but that's about it. XenServer is much more efficient in CPU processing and vCPU scaling. With ESX, you're still stuck with an emulation layer as well as a crappy vCPU scheduler. Add a second vCPU to a VM and performance gets worse! Imagine that...
Hmmm. I love the comments made by guests that state 'facts'. There are PLENTY of disagreements about what performs better than what. In the VMware-versus-Xen argument, I would say processor performance is extremely close. They do handle the processor differently, but all in all the results are very close (Xen winning some functions, VMware ESX winning others). On networking and disk, ESX is better currently, but Xen will get there shortly.

Anyway, I love how the comments section of an article sometimes turns into a religious battle. Hold on, let me get a cup o' coffee and watch the fight :-)
No doubt. Licensing is always going to be the problem. Microsoft et al. are ALWAYS going to move slowly on changing/adapting licensing models. Why? Some folks say it's to maximize the financial side. I think it is some of that, but it has as much (or more) to do with legal implications. Meaning, if you have a EULA or any type of license agreement, lawyers will review it. And getting a lawyer (or team of lawyers) to say ANYTHING is OK takes months and years in big organizations...

We as technology people have to deal with this, but hopefully we can drive technology as needed and deal with licensing as we go. Because the day they get licensing right for this technology, the next one is already on our plate and they are already behind again (kinda like government :-) ).
I am really interested to see, over time, whether DDI/VDI etc. will actually result in equal or better costs compared to standard fat desktops and CPS or TS. If not, this could all just be a lot of smoke. As per the first guest poster on this thread, I agree the possibility of enabling a more dynamic business is a great offset against some of the capital costs. However, I do worry that things like datacenter occupancy costs may make this model very unattractive for many average firms. For these types of firms, the TS/CPS model still seems very attractive. So the Citrix message I heard at iForum, about having more than just one model for DDI versus a VDI-only model, resonates with me, as I believe it meets more use cases than pure VMware VDI.
HUH? This here wouldn't fit into the traditional "DDI or VDI" model. As for the piece on the datacenter and facilities... well, seeing how this is a hypervisor for laptops/desktops, how is that affecting the datacenter?
I think that within twenty years even your pocket pen will have the equivalent of a Core 2 Duo, and surrounded by such an ocean of computing, the notion of a PC becomes meaningless. Software and data will run on a virtual platform; the economics insist this will happen. I blogged about that last year: http://stateless.geek.nz/2006/09/04/the-future-is-an-appliance/

Defining the virtual platform is important as it will be a critical gatekeeper.

When not just your phone but also your sofa can run software, and when that sofa's processing time becomes rentable to, say, a visitor, it creates a whole new market force.
Not sure the notion of a PC becomes meaningless at that point. The power of the processor and memory is really not our current limitation.

Right now the limitation is how we interact with the device. There is plenty of power and technology today to do the work that MOST people do on something very small. The issue today (from a desktop or laptop perspective) is how we manipulate the device (i.e., the input), meaning (assuming the biggest input device) the keyboard. As much as we like our Pocket PCs, they are just that. I would not want to sit down and write this article on my Pocket PC or BlackBerry, and seeing how our hands aren't going to get any smaller, it's down to other interaction methods. Voice recognition would be the first thought, but it is clunky at best for the mass market. Anyway, PCs and laptops are the size they will be. People need to type, and see what they type. Images and video are more available, higher-res, and demand bigger screens for full value.

Boy, anybody remember the HP Jornada? Those things were cool, but only till I could get back to my laptop.
And let's not forget the Psion? ;-)

Ron, you are spot on here. We may be at the "dawning of a new age" with regard to virtualization, and we are all getting pretty excited about what it might enable in the way of allowing IT to be agile - but we are still tied to keyboards at the moment. I can't wait till we get some serious breakthrough on that front.
The cost of storing desktops in central datacenters for ease of management, etc.
I mean the PC as a platform rather than as an interface. In fact, hardware in general as a platform. Which is why I think data will be the new platform.

How we interact with that data is obviously an important consideration. Whether it be a BlackBerry keyboard, a tablet pen, or a desktop keyboard, I don't think it matters once we reach a certain level of computing power. Economics will drive the innovation cycle.

Consider if a portable rubber keyboard had ten times more CPU power than the computer system you are currently using, and you could plug it into an LCD monitor with a standard USB connection. How would that situation define the environment in which applications are developed and run?

I can now get 16 cores into a 1U system for hardly any money, using a 2-node Supermicro system. Ignoring CPU improvements, that is a fourfold increase over 2 years ago. When 8-core chips come out, it scales again. Virtualisation is what makes the economics of this increased density useful: economics defined not only by useful processing power (TS max user limits per instance?) but also by manageability.

I could go on.
Some years ago some friends and I talked about "better" (maybe simply "other") kinds of computers. These computers had separate subsystems for data storage, display + human interface devices, and different "application systems" that are independent from each other and can be maintained, updated, or _located_ individually. I'll come back to "located" later. Our visions were based on hardware subsystems, but this kind of modularization is ideal for virtual machines...

Imagine a subsystem for the user interface:
It takes care of displaying the windows and their automatic updates or redrawing when they're dragged around, put to the background, or minimized. It sends the usual "redraw" event to the application only if it is resized, or if it wasn't displayed recently and its cache has been overwritten.
It handles different kinds of window contents (e.g. text consoles, graphical windows, video displays, 3D content, remote applications) as well as the window style, meaning the look of frames, functional elements, fonts, etc.
It is responsible for showing it on any number and kind of displays attached to the system (screens, TV sets, projectors, status displays, remote controls with a reverse channel, network-attached / wireless display devices, holographic devices (when invented :o) ...). And as this subsystem is destined to render graphics, it can even be used for printing purposes.
Such a subsystem could run on dedicated hardware, such as a graphics card, and take a lot of load off the main processor(s), as the OSes don't need to care that much about display management. This subsystem can be simple (for cheap machines, business computers, handhelds, thin clients) or advanced (for gamers' PCs, CAD or graphics workstations, high-end systems), with or without OS-independent transparency effects, 3D window flipping, virtual desktop management, or other features.
This subsystem should also be able to acquire images from photo or video cameras or scanners and do (simple) OCR or perform copy functions with a printer, without necessarily needing an application! Audio connections should be located here, too. Even the keyboard(s) and pointing device(s) are controlled here. The input focus is managed by the user interface, which only reports it to the application subsystem for process priority purposes. Imagine two sets of keyboards/mice and two people working on the same large screen with different applications, or even within the same window!

Then there is a subsystem for mass storage devices that takes care of anything having to do with data storage.
It provides a UNIX-like root file system into which any data storage device, or part of one, is plugged. It makes sure that any device, like a hard disk, a USB stick, an optical disk, a floppy disk or other removable device, a RAM disk, a virtual device, or a network share, is correctly connected. The storage subsystem is able to assign storage devices or parts of them (even folders) to directory branches or (if you need them) drive letters, and manages "layers" of different storage devices using the same directory branch with read/write priorities (e.g. a directory that uses a DVD as the read device, a hard disk folder for permanent changes and priority reads, and a RAM drive for fast/temporary changes) and access rights... Anything is possible.
This subsystem could automate scanning for viruses when writing files and sign them as "clean with {scan-engine} at {date/time}", perform automated scans in certain idle situations, and scan again when the file type changes - and when reading or re-scanning, it only needs to check against the newer virus patterns...
You can build in different file system functions for transaction tracking, version checking/managing, automated backups of open files, clustering, encryption, and journaling at this point - if you need them.
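To make the layering idea concrete, here is a minimal, purely illustrative Python sketch (the class and names are mine, not from any real product) of a directory whose reads fall through an ordered stack of backing layers and whose writes always land in the top, fastest layer:

```python
# Hypothetical sketch of the "layered storage" idea: reads fall through an
# ordered stack of backing layers (e.g. RAM disk, then hard disk, then a
# read-only DVD); writes land in the top layer and shadow the copies below.

class LayeredStore:
    def __init__(self, *layers):
        # layers are ordered highest-priority first; plain dicts stand in
        # for real backing devices in this sketch
        self.layers = layers

    def read(self, path):
        for layer in self.layers:
            if path in layer:
                return layer[path]
        raise FileNotFoundError(path)

    def write(self, path, data):
        # writes always go to the top (fast/temporary) layer
        self.layers[0][path] = data

ram, disk, dvd = {}, {"report.txt": "v2"}, {"report.txt": "v1", "app.exe": "..."}
store = LayeredStore(ram, disk, dvd)
print(store.read("report.txt"))   # "v2" - the disk copy shadows the DVD copy
store.write("report.txt", "v3")
print(store.read("report.txt"))   # "v3" - now served from the RAM layer
```

A real implementation would of course deal with directories, permissions, and eviction, but the priority-lookup core is exactly this simple.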

The third "part" is a set of multiple flexible subsystems for running applications. They don't necessarily need to run a "full" Windows or MacOS or Linux (or OS/2, as was current at the time); the functions for the other subsystems could be stripped off them. Even specialized operating systems (PalmOS, Symbian, Psion, BeOS, AmigaOS, Atari and C64 emulators, or others) are possible, like somebody wrote earlier in this thread!
We thought of different application systems being able to use different hardware (the PowerPC and Alpha processors were current alternatives when we talked about this) - this isn't very important anymore, but it is more interesting for parallel processing.
And the ability to create special-purpose OSes was very interesting to us, as it allows slim systems for special tasks. A game OS would have features for fast 3D displays and graphics, use of game controllers attached to your user interface subsystem, and so on. A business OS would be more interested in security and safety functions and in controlling the user's access rights at different locations... And it allows the execution of all kinds of processes on a standardized user interface subsystem.

What wasn't interesting in those days was a security subsystem, but it is very important now. This architecture would be ideal for adding such new subsystems or "plug-in subsystems"! Add a plug-in that controls the access rights to different parts of your system to make it safe for use by different persons (your daughter), add one for parental protection to let your daughter surf the web, add an authentication subsystem for fingerprint scanners, RSA tokens, and so on... Add plug-ins to the mass storage system for clustering or journaling (as described above)...

What I meant by "located" is that the interfaces between these subsystems can be location independent. You have display and basic mass storage subsystems built into the thin client on your desktop, and connect to application subsystems on a machine next door or anywhere in the world (the neighbor's file server, a Citrix server at your company, a software vendor's application server...). Then attach a public internet storage service to your mass storage subsystem and back up all your private files to it. Connect your company's application subsystem to the company's mass storage subsystem and work with your company's data and applications anywhere in the world, while your company's admin adjusts the rights you have at different locations. Then connect your display subsystem to the one in your notebook to use both screens with both machines...

Nice dream, but the cheapest things will win... ;o)