
Where is all this virtualization going?

Written on Aug 27, 2006 (23,724 views, 51 comments)


by Ron Oglesby

I sat down with a client the other day and was asked some pretty simple questions that couldn't easily be answered. Those questions were, "So where is virtualization going anyway? Not in the next 12 months, but in the next 2-3 years? What should we know about virtualization that's going to mean changes as big as the ones we saw with our first implementation of our virtualization platform?"

This drove an interesting discussion, with some follow-up on my part in both research and long thought over numerous cold beers and a whiteboard. Once I figured out where I thought it was all going, I started putting my ideas into Word, figuring more people than just me would be interested in where this is headed and why it's important. Of course I have no crystal ball--this is just one guy's opinion of where things are and where they're going. (Feel free to comment with your thoughts.)

Right now you can talk to any company paying lip service to virtualization, or any that considers itself a virtualization expert, and they will begin to talk about the "future" of virtualization. Inevitably they all say some (or all) of the following:

  • Virtualization will expand beyond just servers and into the desktop realm. (Mostly people are talking about VDI here, and of course about five different packages for VDI have been released this year alone.)
  • Virtualization will take the next step into the enterprise expanding beyond server consolidation.
  • Virtualization technology’s next step is to get better performance from VMs to assist the move deeper into the enterprise.
  • Virtualization may act as a conduit for software vendors to deliver application appliances to you preconfigured and ready to go. You just modify some settings for your environment and you are up and running.
  • Built-in virtualization technology at the hardware level will take it to the next step. (Here we're talking about processors that move system calls for VMs out of Ring 0 and into "Ring 1," or network adapters and HBAs that have built-in hardware hypervisors to allow for better control and resource allocation at that level.)

Anyway, if you read this the way I do, I don't see anything huge here. I mean, when ESX really started making an impact in IT a couple of years ago, it did so because it changed the landscape of IT infrastructure as we know it. It changed the way servers were provisioned and spec'd out, how they were purchased, and the number purchased. It changed every calculation we had been making, from the cost of the servers to the amount of HVAC, floor space, UPS capacity, power, network requirements, and storage we needed. It changed the mindset of architects and engineers when designing solutions and gave them options they never dreamed of. It allowed businesses to drive down costs while still providing the same levels of service on a lot of applications, all the while decreasing recovery time for individual applications both locally and during DR.

Do any of the "future of virtualization" statements above come anywhere near to having this kind of impact? Or do they really just extend the existing benefits a little bit without changing our mindset and the way we're already doing things with our virtualization technology of choice?

In a drive-by world where everything is "what can you do for me today," most people aren't looking at the reality of this technology's future. The reality is that sooner or later it's going to be a commodity, just like hardware. Face it: virtualization is just another way to supply hardware to an operating system and its applications. The future of virtualization isn't in the stuff listed above. These items (the hardware ones for sure) will have an impact on how virtualization is done, but software that virtualizes hardware will, in a couple of years, be a commodity just like the hardware it runs on. When you select that software, you will do so just like you purchase hardware. Right now I believe the real race going on in the virtualization space isn't about who can VMotion or support four-processor VMs. The real race is about who has the first lightweight, fully integrated hypervisor that is OEM'ed on servers and desktops. (That's right, I said "desktops," but we'll get back to that in a minute.)

When people talk about a hardware hypervisor (Microsoft, a hardware vendor, VMware, anyone), most people listening assume it is built right into the hardware and that you then install the OSes right on top of it. Let's be realistic: even when it's "built right into the hardware," it's still software. The line between hardware and software has blurred over the years, with on-board BIOSes for hardware components that are fully configurable and upgradeable. The only things keeping them from being their own OS are that they can't manage other hardware components and you can't install an app on them.

Now let's picture a world where you buy an HP server with OEM'ed QLogic HBAs, Intel or OEM'ed Intel NICs, memory from HP or Crucial or (insert favorite vendor here), processors from AMD or Intel, drives from... you get the picture. Even if the processors, NICs, and HBAs all had hardware hypervisors built into them, what controls those hypervisors and connects VMs to them? What connects a virtual machine to each of these components? Right now the closest thing to bare metal is ESX, and improvements to the hardware will bring ESX's performance even closer to raw hardware performance. But what is the future? The future is a thin, OEM'ed layer that can work with and control all these devices. It will not be as bulky as any Windows or Linux OS you have ever seen and will more closely resemble a glorified piece of firmware that boots and starts dividing up resources among whatever number of VMs you have running on the machine. Of course it will still have some type of interface while the server and its VMs are running, but it will be extremely lightweight and self-sustaining. This will come with every x86 server and desktop. What you will buy is not the hypervisor but the management tools that wrap around it. That is the key, and this is where we bleed into desktops a bit.
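
Before we get to desktops, here's a rough sketch of what I mean by "dividing up resources." It's nothing more than a Python illustration of the kind of resource map such a firmware-like layer might read at boot; the field names, VM names, and the boot() call are all invented for the example and aren't any real product's interface.

from dataclasses import dataclass

@dataclass
class VMSlot:
    name: str
    vcpus: int         # share of the physical CPUs/cores
    memory_mb: int     # slice of physical RAM
    nics: list         # hardware-virtualized NIC functions handed to this VM
    hbas: list         # hardware-virtualized HBA functions handed to this VM

# Hypothetical per-host resource map the thin layer consumes at boot.
HOST_MAP = [
    VMSlot("mgmt-vm",    vcpus=1, memory_mb=512,  nics=["nic0.vf0"], hbas=[]),
    VMSlot("workload-a", vcpus=2, memory_mb=2048, nics=["nic0.vf1"], hbas=["hba0.vf0"]),
    VMSlot("workload-b", vcpus=1, memory_mb=1024, nics=["nic0.vf2"], hbas=["hba0.vf1"]),
]

def boot(host_map):
    """Pretend boot sequence: hand each VM its slice of the hardware and start it."""
    for slot in host_map:
        print(f"starting {slot.name}: {slot.vcpus} vCPU, {slot.memory_mb} MB, "
              f"NICs={slot.nics}, HBAs={slot.hbas}")

if __name__ == "__main__":
    boot(HOST_MAP)

The point isn't the code; it's that the layer doing this job is small enough to live next to the firmware and ship on every box.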

Right now on a desktop you have a single OS (let's not talk about VMware Workstation or Virtual PC for this article) with applications installed, antivirus, spyware/malware detection and removal software, an Internet browser, etc. What if your machine didn't look like this? What if your machine looked like the image below?

In this example all four VMs would load at boot time. The network and security VM would be the first one up after the hypervisor loads. This VM would scan all traffic in and out, almost like a network virus wall or an SMTP gateway server. It could be a one-stop shop for traffic scanning, IDS, and IPS, and could even scan traffic between VMs. This security VM may even be a VM that comes OEM'ed along with the hypervisor.
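
To make that a little more concrete, here's a minimal sketch of the kind of scan policy such a security VM might enforce. This is Python built on my own assumptions; the VM names, the rule structure, and the stubbed-out scan engine are invented for illustration, not any vendor's API.

# Which inspections apply to traffic between which endpoints.
SCAN_POLICY = {
    ("external", "trusted-desktop"):        ["antivirus", "ids", "ips"],
    ("trusted-desktop", "external"):        ["ids"],
    ("untrusted-apps", "trusted-desktop"):  ["antivirus", "ids", "ips"],
    ("trusted-desktop", "untrusted-apps"):  ["ids"],
}

def run_engine(name, payload):
    # Stub standing in for a real AV/IDS/IPS engine running inside the security VM.
    return b"known-bad-signature" not in payload

def inspect(src, dst, payload):
    """Run every configured inspection; block the traffic on the first failure."""
    for check in SCAN_POLICY.get((src, dst), ["ids"]):   # unknown paths still get at least IDS
        if not run_engine(check, payload):
            return False    # drop the traffic
    return True             # forward the traffic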

Next is your trusted desktop. This is where you work normally on your corporate LAN and interact with trusted machines/networks. Which machines and networks are trusted is configured in the security VM and is possibly centrally controlled by an administrator. The cool thing is that if this is a personal laptop or computer, you may be able to "outsource" your security by buying a subscription to "Symantec's Security VM Package." (I know, it's a lame name, but it's all I could come up with.) Anyway, the package may be purchased and delivered to the user to replace an existing one shipped by the hardware vendor (back to application distribution by VM appliance). The vendor can then offer you a service where they update the VM (like getting new virus definitions), but in this case it's virus, malware, spyware, firewall, etc., and it's not tattooed into the trusted OS and can be hardened in ways the trusted OS can't while still remaining functional.
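
Just to illustrate how simple the moving parts could be (and this is purely an assumption on my part; the vendor name, URLs, and fields are made up), the trust configuration and the subscription piece might boil down to something like this:

import ipaddress

TRUST_CONFIG = {
    "trusted_networks": ["10.0.0.0/8", "192.168.50.0/24"],        # corporate LAN ranges
    "trusted_hosts":    ["fileserver.corp.example", "mail.corp.example"],
    "managed_by":       "https://mgmt.corp.example/policy",       # admin-controlled source
    "subscription": {
        "vendor": "ExampleSecure",            # stand-in for a "Security VM Package" vendor
        "definitions_feed": "https://updates.examplesecure.example/defs",
        "update_interval_hours": 4,
    },
}

def is_trusted(host, ip):
    """Decide whether a machine/network counts as trusted, per the security VM's config."""
    if host in TRUST_CONFIG["trusted_hosts"]:
        return True
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(net) for net in TRUST_CONFIG["trusted_networks"])

The interesting parts are the "managed_by" and "subscription" entries: swap the feed and you've swapped your security vendor without ever touching the trusted OS.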

The trusted desktop is now behind a line of security that will protect it from the outside and from other VMs. Then you have a default non-trusted application VM. Maybe this VM runs applications like an Internet browser, media player, etc. This VM (or more likely its applications) is invoked whenever the user makes a call outside the trusted area or uses an application specifically configured for high security. The application is then presented to the trusted desktop (kind of like an ICA seamless window) but is actually running in another VM. The non-trusted application VM might not even have an entire OS as we know it. Instead it may be another VM appliance with a small OS that loads just enough to support that browser app and a few multimedia-type apps and presents the screen (like ICA or RDP) into a window in the trusted VM.
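
The redirection decision itself could be dead simple. Here's one hypothetical way to picture it; the VM names and the start_in_vm/present_window calls are stand-ins I made up, not a real hypervisor interface:

UNTRUSTED_APPS = {"browser", "media-player"}    # apps always forced into the untrusted VM

def start_in_vm(vm, app, target):
    # Stub: in the real world this would be a call into the hypervisor/management layer.
    print(f"{vm}: starting {app} -> {target}")

def present_window(from_vm, to_vm):
    # Stub: ICA/RDP-style seamless presentation of a remote VM's window.
    print(f"presenting {from_vm} window inside {to_vm}")

def launch(app, target, target_is_trusted):
    """Decide where an application runs and, if remote, present it back seamlessly."""
    if app in UNTRUSTED_APPS or not target_is_trusted:
        vm = "untrusted-apps"       # the stripped-down appliance VM
    else:
        vm = "trusted-desktop"      # run locally in the trusted VM
    start_in_vm(vm, app, target)
    if vm != "trusted-desktop":
        present_window(vm, "trusted-desktop")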

Finally, you'll have the ability to add other VMs as needed. Maybe you're a contractor like I am and need a specific build with a specific virus package and hotfixes to connect to a client site. The hypervisor's configuration would allow for that to be added, and it may even have a rule base set up that says, "When the Contract A VM is turned on, the trusted VM isn't able to connect to the network." Then when I am at the client site I invoke their VM (maybe by booting it, or maybe by simply using a hotkey sequence, almost like an Alt-Tab) and I am ready. Their security team could even have these VMs preconfigured for delivery to contractors or even new employees. (There I go again with virtual appliances, only in this case it's a home-grown one.)
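
That rule base is the part I find most interesting, so here's a tiny sketch of what it might look like. Again, purely illustrative: the event model, the VM names, and the set_network call are my own assumptions, not any shipping hypervisor's interface.

RULES = [
    # When the contractor VM powers on, cut the trusted VM off the network.
    {"when": ("power_on",  "contract-a-vm"), "then": ("network_off", "trusted-desktop")},
    # When it powers off again, restore the trusted VM's connectivity.
    {"when": ("power_off", "contract-a-vm"), "then": ("network_on",  "trusted-desktop")},
]

def set_network(vm, enabled):
    # Stub for the hypervisor call that attaches/detaches a VM's virtual NIC.
    print(f"{vm}: network {'up' if enabled else 'down'}")

def on_event(event, vm):
    """Fire any rule whose trigger matches the event the hypervisor just saw."""
    for rule in RULES:
        if rule["when"] == (event, vm):
            action, target = rule["then"]
            set_network(target, enabled=(action == "network_on"))

# Example: booting the contractor VM immediately isolates the trusted desktop.
on_event("power_on", "contract-a-vm")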

As you can see, the concept laid out above would solve a number of issues and change the way we do desktops. It would increase security, allow for isolated work environments that are independent of each other, and allow for minute control of the desktop at a level we don't have now. In addition it would give vendors numerous options, like delivering security packages whole to a desktop without worrying about "what rev of Windows do they have; this doesn't run on XP SP2 yet," etc. Hell, the firewall VM may even be able to shut down network connectivity from that machine after so many days without an update. Wouldn't that be cool?

Anyway, with the impact this could have on the desktop and the complexity that it would introduce from a management standpoint, I think you can see where I am going by now. Imagine the following environment at the server level:

  • A security VM, independent of the hypervisor, that does IDS and IPS for all the VMs on the host and acts as a network virus wall.
  • A configuration VM that manages the hypervisor on that box and is independent of the security VM but relies on it.
  • A VM appliance supplied by your backup vendor that you put on each server to act as a backup job manager for each VM. This would manage the backup resources of VMs on that host, ensuring that bandwidth and processing resources aren't all hogged by backing up a large number of VMs at once. It just detects the VMs currently running on that host and (based on configurations from the centralized management server) schedules and moves jobs accordingly. (A rough sketch of what that scheduling might look like follows this list.)
  • A central management tool to handle the configuration of the hardware hypervisor, the management and interaction with the hardware, and the startup sequence of the appliance VMs that will support the application VMs you’ve installed.
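
Here's the backup-appliance idea from the list above, reduced to a hypothetical batching loop. The concurrency limit, VM names, and run_job hook are all invented for the sake of the example; a real appliance would pull its limits from the central management server.

MAX_CONCURRENT_JOBS = 2    # in this scenario, pushed down from central management

def schedule_backups(vms_on_host, run_job):
    """Back up the VMs on this host in small batches so one host never hogs the bandwidth."""
    for i in range(0, len(vms_on_host), MAX_CONCURRENT_JOBS):
        batch = vms_on_host[i:i + MAX_CONCURRENT_JOBS]
        print(f"running backup batch: {batch}")
        for vm in batch:
            run_job(vm)

# Example: four VMs detected on the host, backed up two at a time.
schedule_backups(["web01", "sql01", "exch01", "dc01"], run_job=lambda vm: None)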

Really, the number of things that can be done on the server side is endless, and I've just touched on a few. But the bottom line is that in the future the hypervisor will be a commodity. The future of virtualization is a lightweight hypervisor that manages the hardware and its extended capabilities being built into NICs, HBAs, disks, processors, etc., to support numerous OSes/VMs on a single server. No hardware fix will show up that makes Intel, Broadcom, AMD, QLogic, and Emulex all manageable the same way by some piece of firmware on the motherboard. Nope. It will be software, and it will be installed on every server that rolls out of HP or Dell or IBM. You may still choose to use only one OS, but you won't have to.

Now you may ask, "Who will provide it?" Who knows? My bet is VMware gets there first; they just have too much of a lead in the second quarter of the game. Whoever does it will more than likely provide the hypervisor at little cost to the OEMs, with each server coming with a limited single-host management tool. Then you would be able to purchase additional tools to provide management of large numbers of hosts, their VMs, and the appliances being installed. This, of course, is all in addition to the things that tools like VirtualCenter already do (VMotion, resource balancing, VM provisioning, automated host recovery, etc.).

My guess is you will pick the hypervisor like you pick hardware vendors. You go into a shop with all HP servers and they basically run HP because they like the support, the tools available, the response they get from sales, etc. You could say the same thing about an IBM or Dell shop. You will find the vendor that fits your needs and then buy in bulk. But the key will be the tools and management options. You don't find organizations buying 2 Dells this week, 4 HPs next week, and 3 IBMs the following week. They purchase a standard and stick with it, because they have found the vendor of a commodity that fills their needs. (I mean, really, is there that much difference between a dual-proc box from any of the vendors?) The same reasoning behind hardware purchases will apply to hypervisors in the future. The hypervisor is just going to be an extension of the server. The management tools around it are what's really important.

 
 




Comments

Roy wrote Very interesting!
on Mon, Aug 28 2006 3:54 AM Link To This Comment
I agree with the 'hypervisor vision' as described in the article. Server virtualization is nowadays becoming more and more common. The VDI concept is also becoming popular, but it still lacks the hypervisor ability. I have worked with virtualization products for 5 years now, and in my opinion the only two products that had impact were ESX and Softricity. The next thing will surely be something like the hypervisor...
Guest wrote hardware virtualization
on Mon, Aug 28 2006 3:59 AM Link To This Comment
Hi Ron, nice article.
I believe we may also see a move in the coming years towards virtualization at the hardware level.
With technologies like InfiniBand, the PCI bus basically gets onto the network, allowing you to create a pool of resources.
On top of that, some technologies are already there to virtualize this pool. So in a way, you're no longer thinking about a hypervisor built on hardware and failover between hypervisors; what you get here is one hypervisor built across multiple hardware boxes.
(Think about the takeover of Topspin by Cisco.)
The next level of hardware blades, where you can just add a CPU or memory blade for example (like Egenera), completes the picture.
Of course, today these grid concepts are targeted at the larger data centers requiring high computing resources, but I see no reason why this wouldn't become a reality for us.
 
So there, just some thoughts I wanted to share....
 
Mike.
 
Guest wrote Grand scheme
on Mon, Aug 28 2006 6:12 AM Link To This Comment
To comment on your "leader" in all this: I do not think VMware will be the first to accomplish this. I reckon VMware will be bought by one of the hardware vendors, most likely HP. But that's just my hunch.
I can see them integrating this virtualization bit into their management tools, basically creating a computing pool (grid computing anyone?).
Kinda like the Isilon(.com) storage solution. Plug in whatever you need and we'll use it.
 
Hardware will not be specific anymore. It will just be raw computing power which can be harnessed in any way through virtualisation. Though the term virtualisation is being abused by so many vendors that it is losing its credibility.
 
Now combine raw computing power with apps that do not need an OS. Like basically anything packaged with SoftGrid. As Gartner pointed out, MS has prolly released its last Windows OS with Vista. They'll focus on the rest. They are going down the net road with their Live products. But my guess is the virtualized road will be even more lucrative for them.
I also hope they realize their OS is going to be worthless in a couple of years.
 
Anyways... nice article :)

Guest wrote RE: Grand scheme
on Mon, Aug 28 2006 8:54 AM Link To This Comment
Partly agree. But I don't see HP buying VMware, as it has already been bought by a hardware vendor :)
(EMC)
Brian Madden wrote RE: hardware virtualization
on Mon, Aug 28 2006 12:24 PM Link To This Comment
The Inquirer had a great series of articles a few weeks ago about Intel and hardware virtualization. These articles talk about the future of Intel's "VT" technology.. roadmaps to VT2 and VT3, and how they'll start virtualizing memory access at the hardware level and stuff.

Definitely worth a read for those interested in this stuff..

http://www.theinquirer.net/default.aspx?article=25576

Brian
Guest wrote RE: Grand scheme
on Mon, Aug 28 2006 5:46 PM Link To This Comment
Apps that do not need an OS... Hmmm. So in your opinion, SoftGrid apps do not need an OS? And what the heck does "prolly" mean???????????
Jeff Pitsch wrote RE: Grand scheme
on Mon, Aug 28 2006 8:14 PM Link To This Comment
Ever since browsers were created, people have been predicting the end of the OS. I predict the end of the OS when we get our paperless offices.
Guest wrote Softricity
on Mon, Aug 28 2006 8:25 PM Link To This Comment
Great article. I am sure VMware will be the first to deliver the hypervisor. The rest will be delivered by Microsoft (Softricity). I do not see the need for all those complete VMs. Do you mean a complete desktop or just some kind of virtual environment for a specific app?

It's all about applications. Once Softricity/MS delivers the CONTEXT (hopefully in the next release) under control of a fancy policy engine, we can support all those scenarios that you describe. Why do I need a complete VM to be able to safely use my Internet Explorer? Seems like a hell of a lot of overhead for just one app. Just run that app in its own SystemGuard environment, based on a policy rule.

What we want to achieve is bringing down the administrative boundary that the OS provides us today. ESX so far hasn't addressed this. In fact, in practice it only leads to an increase in OS images and hence the administrative burden of managing them all.

What I would like to see is a hypervisor with, on top of it, a SystemGuard holding a set of Windows services, another with network components, yet another with my printer drivers, another with my f$%#%# Oracle client, and on top of that my applications, all nicely boxed in. Sort of building blocks. And of course I then want a building block manager that manages dependencies between them.

Communication between those blocks is managed by a policy engine that can also define allowed traffic. Heck, traffic between blocks can even be scanned for viruses and spyware.
 
Just my thoughts...
 
Leo van der Mee.
Guest wrote The Vision
on Mon, Aug 28 2006 9:33 PM Link To This Comment
Sorry, but I think it's a bit rich to say this 'will' happen. I think there's a good chance that we're all in for a rude shock in 3 years' time when other solutions competitive to virtualisation at the OS level take hold and rumble straight past anything other than the server side of virtualisation. Maybe that's a few years later or never, who knows.

The point is, let's just remember for a moment that virtualisation as it stands at the moment is a messy-but-effective workaround for a problem caused by bad software design at the OS, service and application level. This is the reason for incompatible software, underutilised one-purpose servers, and messy software dependency hierarchies.

I'm not saying the Oglesby-Vision (tm) will definitely not happen - if nothing changes at any of the other various layers of the IT cake, it most certainly might - I just think that it's a little bit ehm... presumptuously tunnel-visioned... to say that all of IT _will_ transform from the smallest piece of PCB upwards...
Peter Ghostine wrote The Three Virtualization Layers
on Mon, Aug 28 2006 11:00 PM Link To This Comment
Very good article, but it would be useful to quote some of the sources which you've based your research on, as well as the articles and papers in which some of these ideas have already been formulated.

Here's an interesting paper that was recently sent to me by Rick Mack: http://www.ecsl.cs.sunysb.edu/tr/TR189.pdf. It discusses the concept of FVM (feather-weight VM), which clearly demonstrates the feasibility and benefits of what you're trying to outline in your diagram. Unlike hardware virtualization, FVM is an O/S virtualization scheme that uses namespace virtualization as the key ingredient for allowing multiple VM's to be logically isolated from each other and the host O/S while at the same time sharing as many of the host O/S's resources as possible. In a nutshell, this is similar to what Softricity does for applications, but on an operating system scale. With FVM, all resources including files, registries, named objects, IP addresses, you name it, are virtualized. Moreover, interprocess communications and windows messages are intercepted and redirected to the proper VM, thus achieving very strong isolation and reducing the per-VM overhead to a minimum.

You also mention in your article that "virtualization is just another way to supply hardware to an operating system and its applications". I believe that virtualization really transcends this limited definition. This is just one facet of virtualization. Virtualization will actually permeate all physical and logical layers of an IT infrastructure:

1. It supplies hardware to an operating system.
2. It supplies an operating system to an application.
3. It supplies an application to a consumer.

In fact, Terminal Services is a form of virtualization. Ardence (www.ardence.com), which I think Brian happens to have mentioned once or twice before, is also a form of virtualization. I'll even go as far as saying that Redirect-IT, which we introduced back in 2003, is a limited form of virtualization that was designed to solve a specific problem (http://www2.provisionnetworks.com/solutions/redirect-it/redirect-it.aspx).

As far as Terminal Services, however, I look for it to evolve over the next few years in ways that will incorporate the concept of "hypervisor" or even the FVM model described in the aforementioned paper. I also believe that VDI and Terminal Services will eventually converge into a single solution that offers robust isolation capabilities while maintaining the strong economies of scale that TS delivers today. 

Now let's talk about VDI for a moment; specifically, what the pie-in-the-sky VDI solution would look like from a virtualization standpoint, given the three aforementioned virtualization layers:

1. A hardware virtualization layer (VMware, hypervisor) allowing for maximum hardware utilization and the co-existence of disparate guest operating systems.
2. An operating system virtualization layer (similar to the FVM) allowing multiple similar guests to share the same operating system. Otherwise, do we really want to confine ourselves to hardware virtualization and be forced to maintain a separate 10GB disk image per VM in a VDI infrastructure?
3. An application virtualization layer (Softricity, AppStream, Altiris, Thinstall, and soon, Citrix Tarpon) that isolates the guest O/S from the app and other running apps.
 
The interesting thing about these 3 layers is that one could easily mix and match (i.e., 1+2, 1+3, 2+3, 1+2+3). The model looks great on paper and sounds like a phenomenal concept. But with the current state of technology, each layer incurs enough overhead to make the overall solution implausible and exorbitantly expensive by today's standards.

Virtualization requires the orchestrated efforts of all vendors: the chipmakers should improve their chip architectures to yield better VM efficiencies (thin hypervisors), and the operating system vendors should streamline their operating systems and APIs to make such nifty concepts as the FVM more easily attainable.

Virtualization and the hypervisor are age-old concepts as evidenced in the following links and a myriad other resources. Things are truly starting to come full circle, aren't they?
 
http://en.wikipedia.org/wiki/Hypervisor
http://en.wikipedia.org/wiki/Virtual_machine_monitor

Guest wrote RE: Grand scheme
on Tue, Aug 29 2006 2:21 AM Link To This Comment
Big miss on the ECM bit.
Prolly = probabely typed by a lazy person (sorry for that)
 
And SoftGrid does require an OS to package the app... but it is OS independent for the execution of the application. So you can run Windows apps on a Linux box. Basically allowing for really downgraded OSes. The only thing you need is to harness the processing power. VMware's ESX provides that.
Now see the processor manufacturers adding more and more stuff as firmware. (There's even a proof-of-concept exploit out for AMD processors!!)
So there will always be something that provides access to the hardware... but I see this happening in the form of open standard firmware.
Basically removing the need for an OS.
 
And indeed this will not happen overnight... just like the paperless office. Though we use paper because we are human and have a tough time to change... hardware and software have no such restrictions.
 
$0.02
Guest wrote RE: The Three Virtualization Layers
on Tue, Aug 29 2006 3:39 AM Link To This Comment
Not only is this not new, but it's already being developed and has been for a few years. In fact, Microsoft has put "enlightenments" into the OS to make it run better on a hypervisor-style implementation. The following slide isn't a concept; it's what is being done.
 
 
http://download.microsoft.com/download/4/1/e/41e56f34-8f90-405e-9daf-f8aeea249935/InTrack14dec_Presentatie_clean.ppt#440,35,Windows Virtualization
 
 
 
 
Guest wrote Ron forgot a few things
on Tue, Aug 29 2006 7:55 AM Link To This Comment
One thing he forgot to mention was grid and clustered hypervisors. I think this (like InfiniBand) is really where we are headed: commodity boxes in a cluster format where the hypervisor is the clustered application and the hardware assists in the routing of resources.

The secure computing initiative is a different thing and may make it into this space in VDI (so we don't go around losing whole images or VMs), but it isn't very interesting for terminal server access. Terminal Services basically obviates the need for "secure computing platforms," which I do despise.

Hey, BTW, I posted a while back on the Citrix\Thought Leader article and my post came up missing a few days later. Does Brian edit or delete posts?
Jeff Pitsch wrote RE: The Vision
on Tue, Aug 29 2006 8:19 AM Link To This Comment
Nothing ever happens as people predict. I posted the same thing in another thread, but I'm still waiting on the paperless office promised to us back in the 80's, as well as the browser getting rid of the OS. In 3 years' time, pretty much everything will be the same. Nothing, especially in IT, changes very quickly, for the simple reason that companies never, ever adopt anything as fast as people predict. Shoot, each of the past 5 years has been the year of Linux on the desktop... :)
Jeff Pitsch wrote RE: Grand scheme
on Tue, Aug 29 2006 8:28 AM Link To This Comment

And indeed this will not happen overnight... just like the paperless office. Though we use paper because we are human and have a tough time to change... hardware and software have no such restrictions.

 
Actually, I quite disagree with the 'no such restrictions'. Who do you think implements the hardware/software? Humans. Companies have spent a lot of money implementing the systems they have today. It will take more than a few years for these things to become what most of you seem to be predicting. Think of Y2K: how many of those systems were supposed to still be around? How many 'ancient' systems do you still run across? It will be a long while before many of these things come about.

Hardware will always require an OS. You can't get rid of the OS. I have to disagree on the open standard firmware as well. Let's use Linux as an example of an open 'standard'. You can't even take a package for Linux and expect it to run across all flavors. Each vendor will have their own flavor of this 'open standard' (if it ever actually comes to be, which I doubt), making it, in reality, not open and hardly a standard.
 
 
Guest wrote RE: Grand scheme
on Tue, Aug 29 2006 9:30 AM Link To This Comment
Humans implement it...
But they are not subject to change themselves.
If you look at the speed of all the virtualisation changes lately, I do believe we will have some form of OS-independent processing.

That'll basically be something like grid computing. You'll have a pool of processing power to tap into, a pool of memory to use, and a pool of storage. Apps packaged with SoftGrid, which already have all the OS they need baked in, will make use of this.

But then again, only time can really tell whether this is going to happen. Perhaps I am completely wrong... perhaps not.
Jeff Pitsch wrote RE: Grand scheme
on Tue, Aug 29 2006 9:54 AM Link To This Comment
Grid computing has been talked about for years, and what have we got so far? Again, large paradigm changes hardly ever happen in any sort of timely fashion. Virtualization has taken off as many hot technologies have in the past. In time, they cool down and things get back to normal. Browsers, Java, terminal services, etc., etc. Things will change, no doubt about that, but to the extent that within a few years everything we know today is thrown out the door? No way. Companies and vendors have way too much invested in the 'today'. The speed with which virtualization has happened is the same speed at which many technologies have taken off; this is simply history repeating itself. Does anyone truly believe that companies (and vendors) are going to spend millions of dollars ripping everything out and completely replacing their systems? It hasn't happened before, so what makes anyone think it will happen this time? Shoot, I'm still waiting on the Java promise of write once, run everywhere......
Guest wrote RE: Grand scheme
on Tue, Aug 29 2006 9:55 AM Link To This Comment
Wrong again!!! You can't run a SoftGrid app on anything but Windows. Take your $.02 back please.
 
It spelled "probably".
Guest wrote RE: Grand scheme
on Tue, Aug 29 2006 9:57 AM Link To This Comment
I meant it's spelled "probably". You got me fumbling
Ron Oglesby wrote RE: The Vision
on Tue, Aug 29 2006 10:00 AM Link To This Comment
ORIGINAL: Guest

Sorry, but I think it's a bit rich to say this 'will' happen. I think there's a good chance that we're all in for a rude shock in 3 years' time when other solutions competitive to virtualisation at the OS level take hold and rumble straight past anything other than the server side of virtualisation. Maybe that's a few years later or never, who knows.

The point is, let's just remember for a moment that virtualisation as it stands at the moment is a messy-but-effective workaround for a problem caused by bad software design at the OS, service and application level. This is the reason for incompatible software, underutilised one-purpose servers, and messy software dependency hierarchies.

I'm not saying the Oglesby-Vision (tm) will definitely not happen - if nothing changes at any of the other various layers of the IT cake, it most certainly might - I just think that it's a little bit ehm... presumptuously tunnel-visioned... to say that all of IT _will_ transform from the smallest piece of PCB upwards...

 
The "will happen" portion is that server/hardware virtualization WILL become a commodity. That WILL HAPPEN; no way around it. Too many people are working on virtualization technologies for it not to happen. Other things mentioned in the article were about how it will be used once it is a commodity and less expensive at the desktop level. I used a desktop as an example of the hypervisor since (once it is free/cheap) it will be very easy to implement for numerous reasons.

Application virtualization is another topic. I believe that SoftGrid (with MS) will have a huge impact on that and on how we run our apps. BUT, hearing "for a problem caused by bad software design," we have to remember that when we were 16-bit everyone said wait till the apps are 32-bit; now that we are 32-bit it's wait till the apps are virtualized or 64-bit... I am still waiting, and have been since 1997 :-D. Virtualizing server applications (SQL, Exchange, web, etc.) is far enough out that people will continue to virtualize hardware, and with IT becoming more of a trade, with lots of 'average' people becoming programmers and app designers, the quality of the apps and their design just isn't going to change overnight.
 
 
 
 
 
Ron Oglesby wrote RE: The Three Virtualization Layers
on Tue, Aug 29 2006 10:09 AM Link To This Comment
ORIGINAL: Peter Ghostine

Very good article, but it would be useful to quote some of the sources which you've based your research on, as well as the articles and papers in which some of these ideas have already been formulated.



The concepts in this article came really from my 4 years of experience with server and app virtualization. The concept of the hypervisor isn't new, and I am NOT saying it is. I am just giving my vision of where server virtualization is going. I think most people missed the point of the article. The point of the article is that in a few years server/hardware virtualization will be a commodity.

As for research, the "research" for this article (besides my whiteboard) was some googling on virtualization at the hardware level, just to make sure I wasn't missing the boat and that my idea that some type of software would be needed to tie all the hardware components together held up.

Anyway, this article was about server virtualization and how it will become a commodity. I think people are reading the desktop stuff and using it as a soapbox instead of realizing my point, which is that the hypervisor and the mgmt tools that ship with hardware will be free or close to free in the future. As things calm down this will just be another part of our IT landscape.

Really, this article took 2 hours to write, guys, with 30 minutes of googling, 30 minutes of whiteboarding, and an hour of typing, all over some beers. I don't expect to change the world or express thoughts that have never been heard of in that time. But I can point out some things that most IT guys are missing as they jump up and down about virtualization.

Ron
Ron Oglesby wrote RE: The Three Virtualization Layers
on Tue, Aug 29 2006 10:20 AM Link To This Comment
ORIGINAL: Guest

Not only is this not new, but it's already being developed and has been for a few years. In fact, Microsoft has put "enlightenments" into the OS to make it run better on a hypervisor-style implementation. The following slide isn't a concept; it's what is being done.


http://download.microsoft.com/download/4/1/e/41e56f34-8f90-405e-9daf-f8aeea249935/InTrack14dec_Presentatie_clean.ppt#440,35,Windows Virtualization


 
Right, but it's still not just what I am talking about. I am a virtualization guy (believe me, with over 20 virtualization projects and hundreds if not thousands of P2Vs, I get the hypervisor and ring level 0/1/3/5 issues). My point in the article was nothing more than that it will ship with hardware and be on anything you want it to be on, from laptops to servers. If you want to enable it you will; if you don't, you won't.

This slide shows a common issue being addressed through a combination of software and hardware (ring level 0 calls). Everyone (not just MS) is attempting to address this and get performance out of it. And that is the point: everyone is addressing it. Just like hardware, once everyone is making it, it will all become very close in performance and other metrics, and then it will be about mgmt tools and preference.
 
Ron
Guest wrote RE: Grand scheme
on Tue, Aug 29 2006 11:36 AM Link To This Comment
Great article Ron. In addition to running VMs on desktops, there will be service layers that run parallel to the VMs. Several service layers could be built to allow VMs to securely communicate with each other. In other words, the desktop O/S could turn into a series of service layers.

Completely agree with Jeff. The architecture makes sense so it will evolve towards this model - but it will be gradual and incremental.

It is always interesting to discuss who will rule the world; however, my opinion is that there will be several desktop delivery architectures that will continue to function in parallel - this is just another one that makes sense for certain users, devices or networks!
Peter Ghostine wrote I'm buying at VMworld 2006
on Tue, Aug 29 2006 11:37 AM Link To This Comment
Ron,
 
If you plan on being at VMworld 2006, let's have a couple of beers.
 
Peter
Guest wrote RE: The Vision
on Tue, Aug 29 2006 11:37 AM Link To This Comment
That's exactly my point - nothing ever happens as anticipated. While right now virtualising everything from TrueType fonts to servers is in vogue, it still is not _the_ perfect solution where I'm thinking - wow - I wouldn't mind if things stayed like this for the foreseeable future.

Virtualising a server to run multiple instances of, in most cases, the same OS is clunky; virtualising an app to shield it and the OS respectively from the features and functions the OS is supposed to provide the app with also doesn't strike me as a particularly neat long-term solution.

Yes, the above are right now legitimately the non plus ultra in terms of design. I implement Softricity, VMware, etc. with great delight and success. But to me this is merely an intermediary step for where software is going.

I'm sure lots of the good bits of resulting functionality (please note the word functionality and not design or technology) that come out of virtualisation today will find their way into future development frameworks, OSes, hardware and the like. And as it's currently in vogue they'll probably be called virtualised-this and virtual-that, but in the future they'll really have little to do with present-day virtualisation or anything to do with that damned word... 'virtual'.

Then again, as I said, nothing ever happens as anticipated - so maybe in 2010 we'll all have PCs split into 72 virtual appliances with virtualised apps running within those virtual appliances running within a VDI which the user will presumably connect to via a virtual 802.11z wireless connection (I can't see the radio waves so surely they must deserve the term virtual...) straight into the physical brainstem...

Now please excuse me as I cry myself to sleep, having heard, said and read the by-now-meaningless MARKETING TERM 'virtual' 100+ times today...
Ron Oglesby wrote RE: I'm buying at VMworld 2006
on Tue, Aug 29 2006 11:45 AM Link To This Comment
ORIGINAL: Peter Ghostine

Ron,

If you plan on being at VMworld 2006, let's have a couple of beers.

Peter

 
I am always at VMworld! And if you have ever seen me at a conference, you know I am always up for a couple of beers. Meet me at the RA booth anytime.
 
Ron
Guest wrote RE: The Three Virtualization Layers
on Tue, Aug 29 2006 8:08 PM Link To This Comment
ORIGINAL: Peter Ghostine

You also mention in your article that "virtualization is just another way to supply hardware to an operating system and its applications". I believe that virtualization really transcends this limited definition. This is just one facet of virtualization. Virtualization will actually permeate all physical and logical layers of an IT infrastructure:

1. It supplies hardware to an operating system.
2. It supplies an operating system to an application.
3. It supplies an application to a consumer.


 
oh my god as if i wasn't sick enough of that word... now everything that used to be called logical separation, colocation, centralisation, partitioning, etc is being _renamed_ virtualisation without actually changing anything... why? ...hmmm well not really sure why... maybe i'm the only one in the world who's sick of this word and everyone else is having way too good a time selling the same old sh*t with a new sticker...
 
hey i guess if TS and Citrix is virtualisation, then why not make everything virtualisation... logical partitioning of hdd's - well that's GOTTA be called virtualisation...
multitasking... one cpu running multiple apps seemingly at precisely the same time... virtualisation right there...
Guest wrote It's all about control
on Wed, Aug 30 2006 8:41 AM Link To This Comment
Good article, Ron.
 
Another thought to throw out into the mix is these fancy USB sticks. Think about putting software on them that uses OS features but doesn't mess with the OS. For example, my Skype client. I just plug the stick into whatever computer I'm near and run it with all my account and contact info onboard, use the software, then unplug it when I'm gone. Yeah, it isn't Softricity-like virtualization (nor virtual machines) - the app needs to be written to work this way - but with density improving these might become virtual machines on their own.

You might start with Linux as a base OS, just because people will develop free apps. But I think that efficiencies could lead to app vendors pulling a lightweight OS (as opposed to a heavyweight general-purpose OS) into the app and delivering it as a virtual machine. (News flash: real-time OS vendors may have a market after all...)

Forget installation, man (to steal a line from David G)! How about "office on a stick"? Or whatever. Plug into whatever (hypervisor-capable) device you are near and work away. The key part of your article is that security VM. Let me plug in whatever I want to run, and that VM will make sure I can only mess up my own VM. Also make sure that whatever is in the "trusted" host can't mess with me.

We still have issues with all the data (since nobody knows how to deal with that yet), but not bad.
 
My $.01 (it lists as two cents, but I discounted it since you already bought me a beer).
 
tim
Guest wrote vmware too slow
on Wed, Aug 30 2006 11:42 AM Link To This Comment
All I know is that I run vmware 2.5.1 esx on two ibm x445 16gb ram and quad processor, everytime I build a new server for a project it eventually gets requested to be moved off to a physical server for performance issues.  Therefore vmware is only good for test/dev and servers that use hardly any cpu usage, ie citrix license server.  For all that money for I could of bought 20 blade servers! What I get from vmware is just quick deployment for a test environment for developers.  I dont think its worth the money, virtualization has a long way to go on the server level, and I hear that some companies run all their servers on VMWare!  Yeah right!
 
Guest wrote RE: The Three Virtualization Layers
on Fri, Sep 1 2006 1:22 AM Link To This Comment
I would agree that Citrix/TS is NOT virtualization.  I would classify "Virtualization" as it has always been; performing a processor level abstraction for operating systems and applications.  As an example Virtual86 mode which is used on 32 bit machines to run 16 bit applications.
 
I would then classify three classes of implementations which would be "Virutalization" and "Emulation".  Virtualization is where the hardware itself is able to support this abstraction or at least some or all of the instructions are able to run natively on the machine.  Perhaps that's actually a hybrid soluation as the old VM's did.  Pure "Virutalization" I would classify as supported by the hardware fully or with software support.
 
TS does neither of these and all it does is make a mult-user OS.  Unix was multi-user since the 60s, so is Unix now virutailzation?  No, it's a multi-user Operating System and each user contains the nessecary boundaries from the next user.  Nothing is virtualized, so everyone should just stop with the Citrix/TS Virutalization, IT IS NOT VIRTUALIZATION IT"S MULTIPLE USER SUPPORT ON AN OS!
 
 
Ron Oglesby wrote RE: vmware too slow
on Fri, Sep 1 2006 1:02 PM Link To This Comment
ORIGINAL: Guest

All I know is that I run vmware 2.5.1 esx on two ibm x445 16gb ram and quad processor, everytime I build a new server for a project it eventually gets requested to be moved off to a physical server for performance issues.  Therefore vmware is only good for test/dev and servers that use hardly any cpu usage, ie citrix license server.  For all that money for I could of bought 20 blade servers! What I get from vmware is just quick deployment for a test environment for developers.  I dont think its worth the money, virtualization has a long way to go on the server level, and I hear that some companies run all their servers on VMWare!  Yeah right!


 
That's kind of odd, but that "Yeah right!" is, in fact, right: there are a large number of companies running their servers (or at least a large % of their servers) on VMware. Maybe not all their Citrix servers, but others. Generally we find that about 60-65% of servers in any large environment could become VMs without performance degradation or the need to re-architect.

On the IBM side I would say you purchased too much server (cost per VM is too high). Sorry for your bad experience, but it's not the norm.
 
Ron
Jeff Pitsch wrote RE: vmware too slow
on Fri, Sep 1 2006 1:24 PM Link To This Comment
To follow up on what Ron has said, I find that most environments are so over-resourced on their terminal services environments that they could easily eliminate 25-40% of the servers and still run a good, productive environment. What does this mean? Well, when they move to VMs they say 'see, this runs great.' Well, they were never taxing their servers to begin with, so guess what: performance will be the same. Instead of going the ESX route and spending more money on that product, they could have easily reallocated servers they didn't need and pushed their terminal services environment a little harder. That is a bigger cost savings because you aren't spending any more money.

It all comes down to the fact that many companies never stress test their servers to see how much capacity they actually have. They look at the servers and say, oh, I've got 50 users on here, that must mean I need to add more hardware. In reality their systems may not require it.
Guest wrote RE: vmware too slow
on Mon, Sep 4 2006 8:22 AM Link To This Comment
ORIGINAL: Guest

All I know is that I run vmware 2.5.1 esx on two ibm x445 16gb ram and quad processor, everytime I build a new server for a project it eventually gets requested to be moved off to a physical server for performance issues.  Therefore vmware is only good for test/dev and servers that use hardly any cpu usage, ie citrix license server.  For all that money for I could of bought 20 blade servers! What I get from vmware is just quick deployment for a test environment for developers.  I dont think its worth the money, virtualization has a long way to go on the server level, and I hear that some companies run all their servers on VMWare!  Yeah right!



I agree with a lot of this, VMware is far too slow for things like SQL and Exchange servers, its fine for a small number of users but if you have 100 users or more don't even bother trying to run Exchange or SQL on a VM.  For Citrix its fine for around 10 users per VM.  It's also OK for domain controllers that are not doing too much apart from authentication.  DNS servers are also OK and web servers to some extent although not ones that will be accessed by a large number of users.  I think the point about VM is that some applications and technologies work well but others do not.  You simply need to find which applications will run as well on a vm as on a physical box.  In our environment we have a mix of VM and Physical with VM's around 30-40% of our environment but I do not think we'll ever be 100% VM unless the performance of certain applications improves dramitically under a VM.  Can you really imagine 1000 users all accessing Exchange mailboxes on a VM? not going to happen...
Guest wrote RE: vmware too slow
on Wed, Sep 6 2006 3:10 PM Link To This Comment
quote:

ORIGINAL: Guest

All I know is that I run vmware 2.5.1 esx on two ibm x445 16gb ram and quad processor, everytime I build a new server for a project it eventually gets requested to be moved off to a physical server for performance issues.  Therefore vmware is only good for test/dev and servers that use hardly any cpu usage, ie citrix license server.  For all that money for I could of bought 20 blade servers! What I get from vmware is just quick deployment for a test environment for developers.  I dont think its worth the money, virtualization has a long way to go on the server level, and I hear that some companies run all their servers on VMWare!  Yeah right!




I agree with a lot of this, VMware is far too slow for things like SQL and Exchange servers, its fine for a small number of users but if you have 100 users or more don't even bother trying to run Exchange or SQL on a VM.  For Citrix its fine for around 10 users per VM.  It's also OK for domain controllers that are not doing too much apart from authentication.  DNS servers are also OK and web servers to some extent although not ones that will be accessed by a large number of users.  I think the point about VM is that some applications and technologies work well but others do not.  You simply need to find which applications will run as well on a vm as on a physical box.  In our environment we have a mix of VM and Physical with VM's around 30-40% of our environment but I do not think we'll ever be 100% VM unless the performance of certain applications improves dramitically under a VM.  Can you really imagine 1000 users all accessing Exchange mailboxes on a VM? not going to happen...
 
Have you ever called VMware support? They claim they run all their servers on VMware; boy, their systems must be slow! I even went to a VMware demo once and the presenter said, "With VMware you are not going to save money on hardware costs; to me you are saving on deployment scenarios and easy disaster recovery situations!" Which I totally agree with! I just hear so much hoopla about VMware nowadays: everyone is using it, that is the way to go! Maybe it will be when performance gets better, but there is a long way to go, and the cost has to come down for both a quad box and the ESX software for it to be worth it.
Guest wrote Firewall layer updated by AV Vendors? Just Nuts!
on Fri, Sep 8 2006 11:42 AM Link To This Comment
Updates from Symantec or any AV vendor to the core layer of what controls your entire PC and all your data? Methinks that is very frightening! Ever have an AV update totally mess up your server/network? It has happened too many times for me to count.
Jim Kenzig
http://www.thinhelp.com
 
Guest wrote Oh and Hey Ron ... been talking to Gartner Lately?
on Fri, Sep 8 2006 12:28 PM Link To This Comment
Kind of funny (interesting, suspicious, fill in the blank) that this article was posted the same day as Ron's... hmm
Gartner: Windows Vista the Last of its Kind
http://www.pcworld.in/news/index.jsp/artId=4161536
Ron Oglesby wrote RE: Oh and Hey Ron ... been talking to Gartner Lately?
on Sun, Sep 10 2006 9:58 PM Link To This Comment
ORIGINAL: Guest

Kind of funny (interesting, suspicious, fill in the blank) that this article was posted the same day as Ron's... hmm
Gartner: Windows Vista the Last of its Kind
http://www.pcworld.in/news/index.jsp/artId=4161536

 
No, not Gartner that I know of. I have talked to a lot of industry people from different hardware and software vendors and consulted with them on ideas and stuff... I am sure I am not the only one thinking the way I am.
 
Ron
Guest wrote RE: vmware too slow
on Tue, Sep 12 2006 8:28 AM Link To This Comment
I think some of what has been said is true; organisations do typically over-spec servers for the workloads. That said, CPU bandwidths are getting so large that most organisations' infrastructure will not consume an entire machine's capability. With respect to Exchange and SQL as VMs: I have deployed a centralised Exchange 2003 SP1 environment with 2 mailbox servers, hosting 3,000 mailboxes each, supporting Outlook 2003 clients in cached mode. The customer is very happy with performance, stability and recovery capability. This same environment runs in the data centre on 12 ESX servers (2-way, 8GB RAM) supporting 120 VMs, including the Exchange servers above. So I think it comes back to infrastructure design, planning and sizing workloads, and configuration of applications. Considering the benefits of VMs from a DR perspective, and the flexibility, a VM-as-first-option policy makes a lot of sense.
Guest wrote RE: The Three Virtualization Layers
on Sun, Sep 24 2006 11:47 AM Link To This Comment
hey do you know what operating system madden 05, 06,nad 07 using? hint me up back @ fire_blaze_hot@yahoo.com
 
Guest wrote RE: Very interesting!
on Fri, Sep 29 2006 2:22 AM Link To This Comment
Funny to see that this seems to be the future for you people. All this technology has been there for a long time. If you look at the IBM platforms System i and System z, you will see that this has already been in place for many years. Talking about the AS/400 from 18 years back, the whole system was already based on virtualisation.

I have been very enthusiastic about all this, so I understand the enthusiastic reaction of everyone to VMware, but don't make it bigger than it is.
Guest wrote RE: Very interesting!
on Mon, Oct 2 2006 2:44 AM Link To This Comment
Yes, we know virtualisation has been available for many years on expensive proprietary platforms... when you write the O/S AND make the hardware (think IBM mainframe) and you release a new version of it (think System 390) that no one actually needs the power of for one application, you carve it up into little bits so that they have a reason to buy it (think OS/390).

The big deal is that Intel servers have come of age, are much cheaper than the platforms you cite, and in some cases much faster.

That's the real point - this IS a huge deal - it allows the rest of the world to save money and virtualise, and completely changes the way in which IT inside companies can manage and deliver services built on these platforms.

The article is entirely correct - as the hypervisor becomes more and more mainstream it will become commoditised, with hypervisors that contain unique and valuable features being able to command a premium - but the key to all this will be the management tools, as a huge part of the cost savings realised by companies is in the management and deployment.

In 2-3 years the IT landscape inside all large corporates will have been changed fundamentally by this, and we will be well on the way to seeing the SMB market changed also. This is here to stay - and thank goodness, as it was about time!!
Guest wrote RE: vmware too slow
on Wed, Oct 4 2006 12:00 AM Link To This Comment
Frankly with dual core and soon quad core compute density easily supports pretty darn near any workload you can imagine running under ESX. I have 3 years experience and lot of Enterprise installs done with frequent level of penetration of applicability of virtualization running over 90% and several large corps with 100% for 3 years now. We are now counting continuous uptime in compute years on sites. We have 1000 user full desktop replacement terminal server and citrix environments, sql server, exchange, BEA, Oracle from test/dev (of course) through full Prod. Of course we see a lot of ill conceived installs/designs done by Boutiques with no idea of end to end engineering design, lack of SAN understanding and general integration issues found in many phhysical environments. Real life savings run to 7 figures on pure hard capital dollars. It's a swiss army knife of functionality. Get with the program, do your research, appproach it with a view to making it work and it does. Every time. Stop blaming the tool. Overhead on CPU is typically less than 4% (who cares?) and IO can in many cases be faster. With 4GB HBA's, 32GB memory, 8 GigE, properly tuned systems we can well exceed the operational and risk densities at a technical carrying capacity level today. So at a conservative 20-25:1 consolidation ratio management and the business (remember the reason you are employed?) have a compelling reason to think this makes sense. It does.
 
By the way, the 44x issue is frequently discussed and a red herring... check out the user groups. There is a lot of misconfiguration on that box that REALLY reduces carrying capacity - counterintuitive effects. It's not a VMware issue, though, so don't jump to the wrong conclusion based on a sample size of one.
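 
A rough back-of-the-envelope sketch of the consolidation math described above (a minimal Python sketch; the server counts and unit costs are hypothetical placeholders, not figures taken from this thread):
 
# Toy consolidation-savings estimate. All inputs are made-up examples;
# plug in your own counts and costs.
physical_servers = 200        # candidate workloads to virtualize
consolidation_ratio = 20      # conservative 20:1, per the comment above
cost_per_server = 6000        # one commodity two-socket box, in dollars
cost_per_host = 25000         # one beefier ESX host (more RAM, HBAs, NICs)

hosts_needed = -(-physical_servers // consolidation_ratio)   # ceiling division
capital_physical = physical_servers * cost_per_server
capital_virtual = hosts_needed * cost_per_host

print("Hosts needed:          ", hosts_needed)
print("Capital, all physical: $", capital_physical)
print("Capital, virtualized:  $", capital_virtual)
print("Hardware saved:        $", capital_physical - capital_virtual)
 
Even before counting power, cooling, floor space, and DR gear, the hardware delta alone at those hypothetical prices lands in the high six figures, which is the neighborhood the comment above is talking about.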
 
 
Guest wrote RE: I'm buying at VMworld 2006
on Wed, Oct 4 2006 12:14 PM Link To This Comment
I'll also be at VMworld 2006. I will be giving a customer session about disaster recovery.
Guest wrote Last but hopefully not least ...
on Tue, Oct 10 2006 5:19 PM Link To This Comment
As this thread nears the bottom of Brian's website I believe that I should pipe in while I can.
 
Ron, I'm sure you remember or have at least heard of the GNU Hurd operating system (http://web.cecs.pdx.edu/~trent/gnu/hurd/hurd-paper.html). I'm not trying to troll or be a jerk - matter of fact, I believe we met face to face at iForum 2005. Regardless, this form of operating system design has been around for a long time. The dynamic loadable module design is a good one, but it requires a lot of horsepower to run.
 
I would say modern CPU design is not there yet to support such a software infrastructure. The closest CPU to such a design is the Sun Niagara CPU (http://www.realworldtech.com/page.cfm?ArticleID=RWT090406012516).
 
Memory parallelization and many other infrastructure-level design factors need to change before this can happen.
 
So, do we all see it coming? Sure - with dual- and quad-core processors either already here or arriving by Christmas, we are seeing the parallel features you need in the infrastructure. But as for the design you set forth (maybe too many cold ones ;) ... ), I believe the secure computing platform (the antivirus and security arbitrator layer you referred to) will actually live inside firmware and not above the hypervisor. Any layers above the hypervisor will add huge amounts of latency and overhead to such a system. Therefore, the hypervisor layer will have to take care of some of those functions, while stack-level firmware must arbitrate security down to the bare metal.
 
Sadly, this will mean the death of a lot of really neat, easily integrated features at those levels. Much of the innovation in our industry is at the stack (firmware) level. Instant-on OSes and embedded systems are a boon to many of us. I hope that in the rush to "mainframe" (I know... I am using a noun as a verb) the Wintel infrastructure, much of the innovation that has gotten the platform this far does not disappear.
 
Oh well, so I should stop whining and try to give a lame little visual of what I think the whole software stack will look like:
 
 
 
display software (screen scrape or application stream ... ATI and AMD have some cool tech in this area) 
      |
loadable modules (Virtual Machines)
      |
hypervisor
      |
firmware (extended via enhanced BIOS/embedded operating systems)
      |
bare metal
 
What do you think?
James Cabe wrote RE: The Three Virtualization Layers
on Wed, Oct 11 2006 9:57 AM Link To This Comment
I put this below, but I was intrigued by your comments on FVM. As many can tell, the OSS scene is built on tons of research by AT&T, Bell Labs, NCR, the government, colleges, etc., so those designs are far more elegant and advanced than even Microsoft's. I believe this is obviously where the market is headed, though I disagreed with some of Ron's article (it was a good one, though, and it spurred discussion).
 
I'm sure you remember or have at least heard of the GNU Hurd operating system. I'm not trying to troll or be a jerk. Regardless, this form of operating system design has been around for a long time. The dynamic loadable module design is a good one, but it requires a lot of horsepower to run.
 
I would say modern CPU design is not there yet to support such a software infrastructure. The closest CPU to such a design is the Sun Niagara CPU.
Memory parallelization and many other infrastructure-level design factors need to change before these industry changes can happen.
 
So, do we all see it coming? Sure - with dual- and quad-core processors either already here or arriving by Christmas, we are seeing the parallel features you need in the infrastructure. But as for the design you set forth (maybe too many cold ones ;) ... ), I believe the secure computing platform (the antivirus and security arbitrator layer you referred to) will actually live inside firmware and not above the hypervisor. Any layers above the hypervisor will add huge amounts of latency and overhead to such a system. Therefore, the hypervisor layer will have to take care of some of those functions, while stack-level firmware must arbitrate security down to the bare metal.
 
Sadly, this will mean the death of a lot of really neat, easily integrated features at those levels. Much of the innovation in our industry is at the stack (firmware) level. Instant-on OSes and embedded systems are a boon to many of us. I hope that in the rush to "mainframe" (I know... I am using a noun as a verb) the Wintel infrastructure, much of the innovation that has gotten the platform this far does not disappear.
 
Oh well, so I should stop whining and try to give a lame little visual of what I think the whole software stack will look like:
 
 
 
display software (screen scrape or application stream ... ATI and AMD have some cool tech in this area) 
      |
loadable modules (Virtual Machines)
      |
hypervisor
      |
firmware (extended via enhanced BIOS/embedded operating systems)
      |
bare metal
 
What do you think?


Guest wrote RE: The Three Virtualization Layers
on Wed, Oct 11 2006 3:58 PM Link To This Comment
"I would say modern CPU design is not there yet to support such a software infrastructure. The closest CPU to such a design is the Sun Niagara CPU."
 
Responding to this... the Sun Niagara CPU is not such a spectacular processor if you look closer at its design. Secondly, we had better not be dependent on the future development of their processor line - why do you think they are moving in the direction of the Opteron processor? A far better design is found in IBM's POWER5 processor. It has a future roadmap and already has a lot of parallelism in it, with real simultaneous multithreading rather than Sun's plain multithreading - meaning two cores running four threads truly at the same time, whereas Sun's eight-core block only actually runs eight threads at a time and loses processor time switching between threads on each core.
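 
To make the multithreading distinction concrete, here is a toy cycle-level model of the two schemes being argued about: issuing from one thread per cycle versus filling multiple issue slots from different threads in the same cycle. It is a deliberately simplified Python sketch - the thread count, issue width, and stall probability are invented for illustration and do not model the actual POWER5 or Niagara pipelines:
 
import random

CYCLES = 10000
THREADS = 4
READY_PROB = 0.6     # chance a given thread is not stalled in a given cycle
ISSUE_WIDTH = 2      # instructions the core can issue per cycle

random.seed(1)

def switch_per_cycle():
    # Interleaved multithreading: pick one ready thread per cycle and issue from it.
    done = 0
    for _ in range(CYCLES):
        ready = sum(random.random() < READY_PROB for _ in range(THREADS))
        if ready:            # at least one thread can issue this cycle
            done += 1        # but only one thread's instruction is issued
    return done

def smt():
    # Simultaneous multithreading: fill the issue slots from several ready threads.
    done = 0
    for _ in range(CYCLES):
        ready = sum(random.random() < READY_PROB for _ in range(THREADS))
        done += min(ready, ISSUE_WIDTH)
    return done

print("interleaved (one thread per cycle):", switch_per_cycle())
print("simultaneous (multi-issue)        :", smt())
 
In this toy model the interleaved core never issues more than one instruction per cycle no matter how many threads are ready, while the SMT core can use both issue slots in the same cycle, which is the gap the comment above is pointing at.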
James Cabe wrote RE: The Three Virtualization Layers
on Wed, Oct 11 2006 4:45 PM Link To This Comment
Please don't troll. You could go all day long about processor architectures, but the market is heading the way of Cell, Niagara, Terascale, and Optera (AMD's soon-to-be-announced super multi-core proc). Sun was the first to develop it, so I'm going to give them the credit for the processor design. I'll guarantee that many smaller cores (8 to 16 45nm cores on a single package) are the next step in this evolutionary change. That is what allows all of that pretty little software like Pacifica to be realized.
 
So while I agree that the Sun processor itself isn't a powerhouse, its design is what will be used in the new infrastructure supporting this new software stack.
Guest wrote funny
on Wed, Nov 8 2006 9:03 AM Link To This Comment
It's kinda funny really when you think about the whole VMware thing, LOL.
Guest wrote Web Seminars
on Fri, Feb 8 2008 12:22 PM Link To This Comment
If any of you experts are interested in giving one or more online presentations on virtualization, please email me. It's a very hot topic and the seminars would be well marketed, so there is no doubt that you will do very well. Thanks. ehowton@tbdnetworks.com
