Last week I read and tweeted a little bit about MinWin, which is the project name Microsoft has given to the very basic kernel and OS components that make up Windows Vista, Windows 7, and Windows Server 2008 (including R2). With the release of Windows 7 and Server 2008 R2, MinWin has received a bit more press, and with that comes some intrigue from people like us who always want to find a way to do things better (or at least differently). The MinWin effort at Microsoft has resulted in what we see as Windows Server 2008 Server Core, but the project goes beyond Server Core.
To get to know MinWin, we should take a look at its history before taking a look at its potential future.
There's a fantastic article at ars technica about MinWin (really, what do those guys do that isn't fantastic?), so if this little rundown isn't enough for you, jump over there and check it out.
Microsoft started considering a minimal Windows package back with Windows Server 2003, attempting to make it more like Linux with its independent runlevels that build upon each other to provide only what is essential. What they learned was that so many functions were linked in from so many places that the number of core files ran into the thousands (with each file containing many, many functions). For instance, updating your box couldn't be done via the CLI because there was no way to launch Windows Update and select the patches to download, and some fundamental system services relied on the GUI to present messages to users. Add to that the many other functions that couldn't be accessed from a command line (or, better yet, configured WITHOUT a GUI), and there was seemingly no way that Microsoft was going to get to a minimal build.
After Server 2003 was launched, Microsoft turned their focus to the MinWin project. The project had the stated goal of mapping all of the OS components (5500+!) and separating them into layers. Each layer would only depend on the layers beneath it, except, of course, for the first layer: MinWin. On top of that, you could add features such as Active Directory or a print server.
At this point, you're saying "Uhh, yeah, that's Server 2008 Core," and you'd be right. Layers are analogous to roles, and Server 2008 Core is a result of the MinWin philosophy that Microsoft has (but I need to make it clear: Server Core is NOT MinWin). The problem is that even Server Core is still a comparatively large installation, numbering over 600 files and consuming fewer resources, but not as few as you'd expect. The reason for this is that the core DLLs (KERNEL32.DLL, USER32.DLL, and GDI32.DLL) that every application on every Windows box uses are loaded with various functions (over 6000), some related and some not. The ars article gives the example of ADVAPI32.DLL, which is responsible for both managing the services on the local machine and for domain interaction, two unrelated things. Since one of those things (services) would conceivably belong in Server Core, the other also has to be there, without being used.
While Microsoft is locked into using those DLL names forever, the nice thing is that they only need to be there in spirit. The functions of the DLLs can be moved into other files that are more layer-oriented, say, one for services, one for domain interaction, and so on. An application would still call ADVAPI32.DLL, but all that would come from ADVAPI32.DLL would be a reference to the actual DLL with the function inside.
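Windows already has a mechanism for this "reference to the actual DLL" idea: export forwarding, declared in a module-definition (.def) file. A minimal sketch of the concept, where SVCLAYER.DLL is a hypothetical layer DLL of my own invention (not a real Windows file) that would actually hold the services code:

```
; Hypothetical .def fragment: ADVAPI32 keeps exporting StartServiceW
; for compatibility, but the export is forwarded to SVCLAYER.DLL,
; an assumed, illustrative services-layer DLL.
LIBRARY ADVAPI32
EXPORTS
    StartServiceW = SVCLAYER.StartServiceW
```

With a forward like this, an application linked against ADVAPI32.DLL keeps working unchanged; the loader follows the forward to the DLL that really contains the function.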
It seems simple enough, but there are over 6000 functions that need to be organized and prioritized into different layers (can we please just call them RunLevels? It's ok...we won't call it WINux). Enter what Microsoft is calling "API Sets," which at a super-high level is a solution that groups those 6000 functions into 34 different types and runlevels, and maps calls to legacy DLLs to their new, layered counterparts with the help of another DLL, APISETSCHEMA.DLL. This results in the ability to keep the existing, legacy DLLs while organizing and layering the functions contained within them in order to provide a streamlined base OS, MinWin, and the dependent layers above it.
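To make the schema idea concrete, here's a toy sketch of that mapping, not the real APISETSCHEMA.DLL mechanism. All module and function names in the table are hypothetical examples; the point is just the lookup: legacy (DLL, function) pairs resolve to the layered module that actually owns the code, and anything unmapped stays where it is.

```python
# Illustrative sketch only -- NOT the real Windows api-set mechanism.
# Module names like "api-ms-win-service-core" are invented examples.

# Schema: legacy (dll, function) -> layered module that owns the function.
API_SET_SCHEMA = {
    ("advapi32.dll", "StartService"): "api-ms-win-service-core",
    ("advapi32.dll", "LookupAccountName"): "api-ms-win-security-accounts",
}

def resolve(dll: str, function: str) -> str:
    """Return the module a legacy import should be redirected to.

    Falls back to the legacy DLL itself when the schema has no entry,
    so unmapped calls keep working unchanged.
    """
    return API_SET_SCHEMA.get((dll.lower(), function), dll.lower())

# A call compiled against ADVAPI32.DLL gets redirected to the
# hypothetical services-layer module...
print(resolve("ADVAPI32.DLL", "StartService"))   # api-ms-win-service-core
# ...while an unmapped export stays with the legacy DLL.
print(resolve("ADVAPI32.DLL", "RegOpenKeyEx"))   # advapi32.dll
```

The nice property of this indirection is the one the article describes: the services entry can live in a minimal layer while the domain-interaction entry lives in a higher one, and legacy callers never notice.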
Now that we know a little about the history of MinWin and how it works, it's my hope that we as an industry can start to think of ways to use this to our advantage. Personally, I can think of a few ways to use MinWin:
Layering (in the buzz-word sense)
A month or so ago, I wrote about layering and why the technology could be of significant importance to us. My message at the end of the article was that Microsoft should embrace layering and tweak Windows so that we didn't have to shoehorn layers into place around the existing system. In the comments (before they were turned off after descending into one of the more epic flame wars on BrianMadden.com), there was talk of making a two-dimensional layer solution that incorporated vertical slices along with the horizontal layers that we typically consider, as well as how the concept is too immature to be of any real value at this point. Some people are really geeked about it, while others dismiss layering as a silly niche concept.
So, can MinWin help with layering? Can it serve as the base for the compartmentalized OS that fans of layering have been looking for? What about as the base for the two-dimensional, checkerboard approach suggested by Tim Mangan that separates the Applications and the Application Related Data?
Splitting server and desktop management duties

This one came to me as I was writing this, so it's a bit more abstract, but a few months ago I wrote an article about the growing problem in the management of VDI- or SBC-based desktops. While organizations have traditionally had Server Teams and Desktop Teams, the lines are blurred when it comes to VDI and SBC. Using MinWin as a base and adding layers (I mean RunLevels now) to the OS could ease that pain by adopting a policy that says, for instance, server people are responsible for layers 1, 2, and 4, while desktop people are responsible for layers 3 and 5. I suppose this would only apply in a traditional SBC environment, since VDI would presumably have the server people responsible for the server and hypervisor, while the desktop folks would own the virtual machines outright.
With Windows 7, we've seen a strong effort to be better than XP and Vista, and so far the results are positive. In part, that has to be because of the streamlining of the core OS components. My guess is that this model will continue to be implemented, and that we're probably watching what will amount to a complete tearing down and building up of Windows' core systems over the next release or two.
With that in mind, I'll turn it over to you: Do you see a future for the direction that Microsoft is taking that will be advantageous in the desktop and application virtualization space? By no means are we ready to do anything today with this, but the conversation has to start somewhere. So let's hear those thoughts and ideas!