What is MinWin, and what (if anything) does it mean to us?

Last week I read and tweeted a little bit about MinWin, which is the project name Microsoft has given to the very basic kernel and OS components that make up Windows Vista, Windows 7, and Windows Server 2008 (including R2).  With the release of Windows 7 and Server 2008 R2, MinWin has received a bit more press, and with that comes some intrigue from people like us who always want to find a way to do things better (or at least differently).  The MinWin effort at Microsoft has resulted in what we see as Windows Server 2008 Server Core, but the project goes beyond Server Core.

To get to know MinWin, we should take a look at its history before taking a look at its potential future. 


There's a fantastic article at ars technica about MinWin (really, what do those guys do that isn't fantastic?), so if this little rundown isn't enough for you, jump over there and check it out. 

Microsoft started considering a minimal Windows package back with Windows Server 2003, attempting to make it more like Linux, with its independent runlevels that build upon each other to provide only what is essential.  What they learned was that they had so many functions linked in from so many places that the number of core files ran into the thousands (with each file containing many, many functions).  For instance, updating your box couldn't be done via the CLI because there was no way to launch Windows Update and select the patches to download, and some fundamental system services relied on the GUI to present messages to users. Add to that the many other functions that couldn't be accessed from a command line (or better yet, configured WITHOUT a GUI), and there was seemingly no way that Microsoft was going to get to a minimal build.

After Server 2003 was launched, Microsoft turned its focus to the MinWin project. The project had the stated goal of mapping all of the OS components (5,500+!) and separating them into layers.  Each layer would depend only on the layers beneath it, except, of course, for the first layer--MinWin.  On top of that, you could add features such as Active Directory or a Print Server.
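To make the layering rule concrete, here's a minimal sketch of what "each layer depends only on the layers beneath it" means. The layer numbers, component names, and dependencies below are illustrative inventions, not Microsoft's actual component map:

```python
# Hypothetical sketch of the MinWin layering rule: a component may only
# depend on components in layers at or below its own. All names and
# layer assignments here are made up for illustration.

LAYER = {
    "minwin-kernel": 0,      # MinWin itself: kernel, HAL, basic services
    "networking": 1,
    "services-mgmt": 2,
    "print-server": 3,
    "active-directory": 3,
}

DEPENDS_ON = {
    "networking": ["minwin-kernel"],
    "services-mgmt": ["minwin-kernel", "networking"],
    "print-server": ["services-mgmt", "networking"],
    "active-directory": ["services-mgmt", "networking"],
}

def violations(layer, deps):
    """Return (component, dependency) pairs that reach UP the stack."""
    return [(c, d) for c, ds in deps.items()
            for d in ds if layer[d] > layer[c]]

print(violations(LAYER, DEPENDS_ON))  # [] - no upward dependencies
```

The interesting property is that anything below a component can be shipped without anything above it, which is exactly what lets a role (a layer, in this analogy) be added or left off a Server Core box.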

At this point, you're saying "Uhh, yeah, that's Server 2008 Core," and you'd be right.  Layers are analogous to roles, and Server 2008 Core is a result of the MinWin philosophy at Microsoft (but I need to make it clear - Server Core is NOT MinWin).  The problem is that even Server Core is still a comparatively large installation, numbering over 600 files and consuming fewer resources, but not as few as you'd expect.  The reason for this is that the core DLLs (KERNEL32.DLL, USER32.DLL, and GDI32.DLL) that every application on every Windows box uses are loaded with various functions (over 6,000), some related and some not.  The ars article gives the example of ADVAPI32.DLL, which is responsible both for managing the services on the local machine and for domain interaction, two unrelated things.  Since one of those things (services) would conceivably belong in Server Core, the other also has to be there, without being used. 

While Microsoft is locked into using those DLL names forever, the nice thing is that they only need to be there in spirit. The functions of the DLLs can be moved into other files that are more layer-oriented, say, one for services, one for domain interaction, and so on.  An application would still call ADVAPI32.DLL, but all it would get back from ADVAPI32.DLL would be a reference to the actual DLL with the function inside.

It seems simple enough, but there are over 6,000 functions that need to be organized and prioritized into different layers (can we please just call them RunLevels? It's ok...we won't call it WINux).  Enter what Microsoft is calling "API Sets," which at a super-high level is a solution that groups those 6,000 functions into 34 different types and runlevels, and maps calls to legacy DLLs to their new, layered counterparts with the help of another DLL - APISETSCHEMA.DLL.  This results in the ability to keep the existing, legacy DLLs while organizing and layering the functions contained within them in order to provide a streamlined base OS, MinWin, and the dependent layers above it.
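The redirection idea above can be sketched in a few lines. This is an illustrative model only: the mapping table and the "api-ms-win-..." host names below are invented for the example (the real schema lives inside APISETSCHEMA.DLL and is consumed by the Windows loader, not by application code like this):

```python
# Illustrative model of API-set-style indirection: a legacy DLL export
# is resolved to the layered DLL that now actually hosts the function.
# The table and host names are made up; this is not the real schema.

API_SET_SCHEMA = {
    # (legacy dll, function) -> layered host that now owns it
    ("advapi32.dll", "StartService"):   "api-ms-win-service-core",
    ("advapi32.dll", "LsaLookupNames"): "api-ms-win-security-lsa",
}

def resolve(dll, func):
    """Map a legacy import to its layered host, or keep the legacy DLL."""
    return API_SET_SCHEMA.get((dll.lower(), func), dll.lower())

print(resolve("ADVAPI32.DLL", "StartService"))   # api-ms-win-service-core
print(resolve("ADVAPI32.DLL", "RegOpenKeyExW"))  # advapi32.dll
```

The point of the design is that applications keep importing ADVAPI32.DLL by name, while the loader quietly sends each call to the layer that actually implements it, so a minimal build only has to ship the layers it uses.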


Now that we know a little about the history of MinWin and how it works, it's my hope that we as an industry can start to think of ways to use this to our advantage. Personally, I can think of a few ways to use MinWin:

Layering (in the buzz-word sense)

A month or so ago, I wrote about layering and why the technology could be of significant importance to us.  My message at the end of the article was that Microsoft should embrace layering and tweak Windows so that we didn't have to shoehorn layers into place around the existing system.  In the comments (before they were turned off after descending into one of the more epic flame wars on BrianMadden.com), there was talk of making a two-dimensional layer solution that incorporated vertical slices along with the horizontal layers that we typically consider, as well as how the concept is too immature to be of any real value at this point.  Some people are really geeked about it, while others dismiss layering as a silly niche concept.

So, can MinWin help with layering? Can it serve as the base for the compartmentalized OS that fans of layering have been looking for?  What about as the base for the two-dimensional, checkerboard approach suggested by Tim Mangan that separates the Applications and the Application Related Data?


This one came to me as I was writing this, so it's a bit more abstract, but a few months ago, I wrote an article about the growing problem in the management of VDI or SBC-based desktops. While organizations traditionally have had Server Teams and Desktop Teams, the lines are blurred when it comes to VDI and SBC. Using MinWin as a base and adding layers (I mean RunLevels now) to the OS could ease that pain by adopting a policy that says, for instance, server people are responsible for layers 1, 2, and 4, while desktop people are responsible for layers 3 and 5.  I suppose this would only apply in a traditional SBC environment, since VDI would presumably have the server people responsible for the server and hypervisor, while the desktop folks would own the virtual machines outright.

With Windows 7, we've seen a strong effort to be better than XP and Vista, and so far the results are positive. In part, that has to be because of the streamlining of the core OS components. My guess is that this model will continue to be implemented, and that we're probably watching what will amount to a complete tearing down and building up of Windows' core systems over the next release or two.

With that in mind, I'll turn it over to you: Do you see a future for the direction that Microsoft is taking that will be advantageous in the desktop and application virtualization space? By no means are we ready to do anything today with this, but the conversation has to start somewhere. So let's hear those thoughts and ideas!


Join the conversation



MS should do more to simplify the OS. However it's going to take years. I mean the vast majority of people will have moved to Win7 by 2014, when XP is officially dead. Win 7 is not going to solve this issue, and it will be around until at least 2018, then what? Windows 8, skipped, perhaps, just like Vista was, unless there are real reasons to move.  Cloud-based OSs that run as layers that users can just consume - who knows. The point being, since MS will take so long to get to it, the smaller companies will do other things to drive the mgmt costs of Windows down. It makes sense for them to do it since DT Virt is here now. It will be interesting to see what the vendors do vs. how MS responds at the OS level, and copies or buys the best vendors to ease mgmt. I wonder, though, how interested MS is in DT mgmt. They are so focused on data center mgmt going after VMware; all they offer is crappy MDOP for desktops, which is so far away from reality. So my bet: desktop mgmt is a big deal for people for years to come, no matter what MS does.


One advantage: a smaller footprint for devices that run the current form of Windows Embedded Standard.

In a real-world scenario, the benefits of modularity are already in place with Linux, and we're doing a POC to build our own thin devices around a trim Linux OS with the vWorkspace client.  I wouldn't mind doing the same thing with Windows Embedded Standard (or its future replacement), but it still has a large footprint (and some 'legacy' problems with security), not to mention that I don't think an end user can license it.

The OSs from Microsoft will undoubtedly increase in modularity in order to take advantage of new device forms.  Linux already runs on everything from a phone to some of the world's fastest computer systems.  I don't believe Microsoft can dismiss the ubiquity of Linux on so many consumer devices, and I welcome a lean & mean Windows platform.

As for cloud OSs: I don't know of a single cloud infrastructure built on the Windows platform except for Microsoft's own Azure...who could afford it?

As Microsoft is so proud of saying: we listen to customer feedback...so give us a modular OS already!


Windows Embedded Standard 2011 might be farther along in the modular aspect. Personally I only have experience with XP Embedded, so I really can't say one way or another. I've only kept a cursory eye on 2011.

I do like XP Embedded and feel that MS should have allowed it to be licensed with an EA. That it was restricted to OEMs only prevented my org from using it.


Another good source of info about MinWin is Mark Russinovich's PDC talk.  Although MS is moving towards layering, getting high enough up the stack to affect us desktop/app people in a large way is a long way down the line.

At the moment MinWin will only give us a better core version and improvements to WinPE and Embedded editions.

I guess what I'm saying at the moment is: So what?  If the trend continues, we might be able to take advantage in 6 years.


Saying that Server Core is a "public iteration" of MinWin might be a bit of a stretch.  Although Server Core reflects the philosophy behind MinWin and is built on top of it, Server Core is not MinWin.  MinWin is the smallest subset of Windows components that are self-sustaining and have no dependencies on higher-level components.  Basically, it's the kernel, HAL, some basic system services and a TCP/IP stack. Its entire footprint is 28 MB and it can't do much: it can't even present a command prompt.  A Server Core install obviously is much larger than 28 MB and can present a login screen, let alone a command prompt.  Furthermore, some Server Core services still have dependencies on libraries containing GUI functionality, even if that functionality isn't used.  Just wanted to clarify this to prevent any interpretations that Server Core = MinWin.


I for one support this new WINux concept; the ability to get less out of Windows clients should be supported. As stated above, smaller PE and thin images would be one great benefit.


Thanks Brad - that's a much better way of saying it.  I'll make some changes to clear it up.


With the layering of Windows services, it should eventually be possible to transfer certain of today's local subsystems (e.g., printing) to a cloud infrastructure. By transferring local services to a shared infrastructure, VDI images could become substantially smaller and more secure. This concept would also be a good way to marry the flexibility of VDI with the manageability of SBC.


@daan. Good thought, however a ways away IMO. The layering of the mgmt of Windows is more real, and that's where the real cost of Windows is in today's world.