Can we get to a more secure desktop? A tour of all the new Microsoft security features.

Firewalls. Disk Encryption. Anti-virus. Take away admin rights. Tell people over and over to stop clicking on stuff. It's not enough. Here's what Microsoft is doing that can help.

Can you get to a more secure computing platform than this? Microsoft would like you to. Really, they would. In the Anniversary Update to Windows 10 (aka build 1607), they added more capabilities for you to use, and you should be looking at them.

Ultimately, the goal of any malware attack is to obtain administrative and/or system access. Anti-virus and anti-malware protections depend upon detecting known signatures to prevent this, but they are always vulnerable to attacks using code with different signatures, or on systems not yet updated with the latest signature list. Intrusion protection systems attempt to prevent the malware payload from getting access to the system, or they attempt to detect suspicious activity after an infection has occurred.

A more complete solution is necessary, and with the Windows 10 Update (and soon Server 2016), Microsoft is adding some interesting changes that might help us to get more secure. Keep in mind that nothing is foolproof, and you will still need to monitor, but hopefully you can prevent enough and encourage outside parties to look elsewhere.

In this article I’ll provide an overview of what is new, why it is important, and then finally who should care.

BitLocker / Disk Encryption

Whole disk encryption is about protecting the data at rest. BitLocker and third-party equivalents prevent the disk from being read or modified when the computer is not running. We normally consider this in the context of the data being stolen, but it also prevents modifications to the OS image while the OS is not running. This isn't anything new in Windows 10, but if you are thinking about the security of the OS you should start with disk encryption everywhere.

Virtualization Based Security

Although portions of what this article discusses may be implemented on machines that do not have the hardware requirements to run a hypervisor, having hypervisor capable hardware (including IO-MMU, SLAT, and TPM) present and enabled is key to properly securing the operating system of the future. Microsoft generically refers to this as “Virtualization Based Security”.

Although in theory other hypervisors could enable features integrated with the Microsoft OS, it isn't clear whether Microsoft has made the necessary information public at this point for them to do so. So all of our experience with these features right now is with the built-in Hyper-V capabilities. We don't need the full-blown hypervisor that can run virtual machines, just the portions that support trusted execution and secure partitioning. In the Windows Features portion of the Control Panel, you would enable this by selecting the Hyper-V Hypervisor without the Hyper-V Services.

Selecting the Hyper-V Services in addition to the Hypervisor would enable running virtual machines, but this is not necessary for the features discussed in this article. It is also not necessary that this operating system be running directly on hardware. With nested hypervisor support (such as in the upcoming Server 2016), this OS may itself be a VM, as long as you enable the second-level hypervisor in the VM, as shown in Figure 1.

Second-level hypervisor screenshot
Figure 1.
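If you prefer to script this rather than click through the Control Panel, the hypervisor-only feature can be enabled from an elevated PowerShell prompt. This is a sketch using the standard Windows optional feature names:

```powershell
# Enable just the hypervisor platform, without the Hyper-V services
# that run virtual machines (requires elevation; reboot afterwards).
Enable-WindowsOptionalFeature -Online `
    -FeatureName Microsoft-Hyper-V-Hypervisor -NoRestart
```

Adding the Microsoft-Hyper-V-Services feature later would give you the ability to run virtual machines, matching the checkbox behavior described above.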

In addition to enabling the hypervisor, you will also want to configure the Virtualization Based Security features using a local or group policy object, shown below in Figure 2. There are several parts to Virtualization Based Security, which may be independently leveraged as appropriate.

Virtualization-based security screenshot
Figure 2.
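For reference, the Group Policy setting in Figure 2 is backed by documented registry values under the DeviceGuard key. On a lab machine that isn't domain-joined, a sketch of setting them directly might look like this:

```powershell
# Registry equivalent of "Turn On Virtualization Based Security"
# (values as documented for build 1607; a reboot is required).
$key = 'HKLM:\SYSTEM\CurrentControlSet\Control\DeviceGuard'
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name EnableVirtualizationBasedSecurity -Value 1 -Type DWord
# 1 = Secure Boot only, 3 = Secure Boot with DMA protection
Set-ItemProperty -Path $key -Name RequirePlatformSecurityFeatures -Value 3 -Type DWord
```

In production you would deliver the same values through the Group Policy object rather than writing the registry by hand.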

Device Guard and Secure Boot

Assuming that you already encrypt the disk at rest, a more complete and secure solution must start with coverage of the system boot, before you can even have a hypervisor or an OS. Newer 64-bit hardware supports UEFI, and UEFI is needed for you to enable Secure Boot as part of this protection. When enabled, which is something you do in the UEFI BIOS setup, Secure Boot records information on all the EFI modules and stores it in a secure, password-protected way in the BIOS. Once you have the known-good BIOS on the system, it will only boot using this version unless someone with physical access and the BIOS password changes it out.

Secure Boot also protects the OS image that is booted. This means an attacker can't boot to an alternate OS to make changes and then boot you back to Windows, unless they have physical access and the UEFI password. After booting, Windows 10 detects whether Secure Boot is enabled, which then enables additional protections to be implemented from within the OS. You can check the BIOS mode and whether Secure Boot is enabled by using the built-in msinfo32 command, as shown below in Figure 3.

msinfo32 command screenshot
Figure 3.
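If you need to check this across many machines, a script-friendly alternative to msinfo32 is the built-in Confirm-SecureBootUEFI cmdlet:

```powershell
# Returns True when the machine booted via UEFI with Secure Boot
# enabled, and False when it is disabled. Throws an error on legacy
# BIOS systems. Run from an elevated PowerShell prompt.
Confirm-SecureBootUEFI
```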

I should mention that there are still some gaps in firmware coverage not handled by UEFI/Secure Boot. This includes device firmware (such as on a hard disk controller) that could potentially still be at risk. Most likely an attacker would need physical access to get at these, but hopefully these potential threats will also be addressed someday.

Device Guard and Code Integrity

Once the system is booted, we also want to protect the kernel, drivers, and user mode code and scripts. Microsoft has been slowly advancing the movement towards all drivers being properly signed, and to get a secure system all of your drivers must comply. But beyond just requiring signing, enabling Code Integrity allows for a far more complete solution than we have had in the past, covering all kernel and user mode executables.

About 13 years ago I built a Windows XP lockdown system for a vendor, which used a hash-based whitelist/blacklist to prevent unrecognized executables from running by detecting and terminating them from a Windows service running in the background. This was a really good idea, but one too far ahead of its time. Later, in Windows 7, Microsoft introduced AppLocker, which was a much better implementation than mine since it was implemented inside the kernel of the OS; it therefore better protected itself and was able to intercept execution at an earlier stage.

AppLocker allows both whitelisting and blacklisting of user mode executable components (support for listing scripts was added later). Whitelisting allows a company to specify exactly which components may run and blocks all others not on the list. Signatures are matched against a known list of good components; anything that doesn't match is simply prevented from loading. The signature checking may use any level of a digital signature, or, if the file is unsigned, a file hash may be generated. While a file hash is not 100% perfect, the hash generation uses some of the same protections used within digital signatures. This makes the effort to reverse engineer the hash an exhaustive trial-and-error experience, hard enough to discourage all but the most persistent parties from attempting it.
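As a concrete illustration of this matching, the built-in AppLocker cmdlets can generate rules from an existing folder, preferring publisher (signature) rules and falling back to file hashes for unsigned files. The folder path here is illustrative:

```powershell
# Build AppLocker rules for everything in a folder: publisher rules
# where files are signed, hash rules otherwise, emitted as XML.
Get-AppLockerFileInformation -Directory 'C:\Program Files\MyApp' -Recurse |
    New-AppLockerPolicy -RuleType Publisher, Hash -User Everyone -Xml |
    Out-File '.\MyAppPolicy.xml'
```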

When released, AppLocker did not get much use because most companies were not ready to start worrying about protecting their systems to this extent. Even if interested, many were so bogged down with getting all their applications to work on Windows 7 that they didn't have time to add the additional effort of signature generation to the workload. And even then, it was likely that they were not ready to upgrade their Active Directory domains to the required functional level. Some did implement AppLocker; however, almost all ended up using blacklisting instead of whitelisting. While effective in assisting enforcement and blocking known bad executables, blacklisting is ineffective as a security technique because you cannot block the unknown. And even if AppLocker is used in whitelist mode, being implemented in an insecure kernel means that (in theory) it is possible for malware to come directly into the kernel and bypass AppLocker. So to really secure the system we need to move to Virtualization Based Security solutions that protect themselves.

Device Guard Code Integrity runs as a “trustlet” in the isolated kernel, using the hypervisor's isolation to protect the feature from even the kernel itself. Microsoft uses the term “Code Integrity” for the new signature matching with Device Guard. In essence, Code Integrity is very similar to the AppLocker verification, allowing matching against digital signatures or a generated file hash. But by being implemented in an isolated environment, the Code Integrity software is much better protected than AppLocker could be. When using a digital signature for matching, you can choose to match at the code certificate level, or any level on up. For example, working at the second level (right under the CA root certificate) would allow a single policy definition to cover anything signed by a given vendor (such as Microsoft). This makes for reasonable security with less work. For unsigned software, you can choose to sign it using your own code signing certificate, or, more likely, generate a unique file hash for matching. The individual file hashes may now be based on MD5, SHA1, or even SHA256, so they should be more secure than in the past. EXEs, DLLs, and a handful of other binary and/or script extensions may be protected.

Unlike AppLocker, which covered only user mode components, Device Guard Code Integrity covers both kernel and user mode components. When you enable the Code Integrity feature you will also want to set which mode to run in. Running in Audit mode will simply do the checking, but allow executables to run anyway. Those not matching the policy will be silently logged in a new event log that you can scan for missing or unauthorized components. You will always want to start using audit mode until you get a good stable list and procedures for handling the inevitable changes. Then later you enable enforcement mode. You may even run most of your systems in audit mode full-time and only lock down systems that need enhanced security.

Device Guard only supports whitelisting, so to use it in enforcement mode you have to go all-in and scan everything into the list. Start the list by scanning your known-good built image using a fairly simple PowerShell script. As part of that scan you specify how you want to handle digital signatures and unsigned components. The output is an XML policy file that you can read, edit, and combine with other scans to produce a single master file of known good components.
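That initial scan uses the New-CIPolicy cmdlet. A sketch, assuming you want second-level certificate matching with a hash fallback for unsigned files:

```powershell
# Scan a known-good image and emit an XML Code Integrity policy.
# -Level PcaCertificate matches at the level just below the root CA;
# -Fallback Hash generates hashes for unsigned files; -UserPEs
# includes user mode executables, not just kernel components.
New-CIPolicy -ScanPath 'C:\' -Level PcaCertificate -Fallback Hash `
    -UserPEs -FilePath '.\InitialScan.xml'
```

The resulting XML can then be reviewed, edited, and merged with scans of individual applications.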

For components outside of the base image, such as applications to be delivered via Configuration Manager or App-V, you'll want to generate a policy file for just that application. In the case of Configuration Manager, you'll need the signature to cover both the installer MSI file and all of the executable components it lays down. For App-V packages, you only need a file covering the components it lays down. I've even included the generation of the policy file for an App-V package in the latest version of my free AppV_Manage tool. And I'm guessing that other tools will come on the scene to provide application-specific policy files as well.

Once you have these, you merge the multiple policy files into a single file, which you then digitally sign with your own signing certificate to protect the whitelist from being tampered with after creation and distribution. While you can include the policy file in the delivered OS image, you should really deliver it using a new Group Policy Object, as it will need to be updated when patching occurs and when new or updated applications are deployed.
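The merge, conversion to binary form, and signing steps use the Merge-CIPolicy and ConvertFrom-CIPolicy cmdlets plus signtool.exe. A sketch with illustrative file and certificate names:

```powershell
# Merge per-application policies into one master policy, convert it
# to the binary form Windows consumes, then sign the result.
Merge-CIPolicy -PolicyPaths '.\InitialScan.xml', '.\App1.xml' `
    -OutputFilePath '.\Master.xml'
ConvertFrom-CIPolicy -XmlFilePath '.\Master.xml' -BinaryFilePath '.\SIPolicy.p7b'
# "MyCodeSigningCert" is a placeholder for your own certificate;
# the OID marks this as a Code Integrity policy signing.
signtool.exe sign -v -n "MyCodeSigningCert" -p7 . `
    -p7co 1.3.6.1.4.1.311.79.1 -fd sha256 '.\SIPolicy.p7b'
```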

And by the way, you can still combine Device Guard whitelisting with AppLocker blacklisting for things like controls on individual users, if desired.

Device Guard and DMA (Direct Memory Access) protection

Enabling DMA (Direct Memory Access) protection support is done as an option in the same Group Policy object for Virtualization Based Security (see Figure 4). This support also depends on the hypervisor feature being enabled.

Group Policy object for Virtualization Based Security screenshot
Figure 4. (Click to expand)

DMA protection closes off another potential avenue of attack: pluggable devices that leverage a known-good device driver. Even when the device driver itself is protected via Code Integrity, if the device driver uses DMA, the external device could directly read or modify memory that it shouldn't be touching. To protect against this, enabling Secure Boot with DMA protection allows Microsoft to leverage the hypervisor to control the memory that the external device is allowed to access. You enable this via the Group Policy setting shown in Figure 4. This is the same way that hypervisor partitioning is achieved between virtual machines, but it does not require virtual machines to be running; you only need the Hypervisor itself to be enabled (the Hyper-V Services checkbox would add the ability to run virtual machines). It isn't clear whether Device Guard with DMA protection will work with hypervisors other than Microsoft's built-in Hyper-V, but in theory it could be implemented by other hypervisors.

Credential Guard

Credential Guard is another feature that depends on the hypervisor and Virtualization Based Security. Although originally an independent feature in the Threshold builds of Windows 10, it is now removed from the Windows Features list and set via Group Policy. There are two options when you enable it; enabling with the UEFI lock prevents an intruder from disabling it without physical access and the UEFI password. Credential Guard runs as a trustlet in the isolated kernel, using the hypervisor's isolation to protect credential information held by the operating system. In a situation where malware gets into the kernel, this prevents even the OS kernel from gaining access to the credential hashes, preventing “pass the hash” attacks. A good demo of how this works is included in this breakout session video from the Ignite conference.
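As with Virtualization Based Security, the Credential Guard Group Policy option is backed by a documented registry value; a sketch for a lab machine:

```powershell
# Registry equivalent of the Credential Guard Group Policy option.
# 1 = enabled with UEFI lock, 2 = enabled without the lock.
# A reboot is required for the change to take effect.
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\LSA' `
    -Name LsaCfgFlags -Value 1 -Type DWord
```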

Windows Defender Advanced Threat Protection

Microsoft has also discussed new subscription options for Windows 10 that are similar to their Office 365 offerings. The “E5” version of the Windows 10 subscription (expected to be available sometime this autumn) is slated to include a new feature/product currently called Windows Defender Advanced Threat Protection (ATP).

While enabling Device Guard, Secure Boot, Code Integrity, and DMA protection gives you a pretty good way to prevent intrusions, you should still assume that you will be breached and be prepared to deal with it. ATP uses telemetry built into the OS to help you understand what happened and where. At a Microsoft Ignite keynote, ATP was demonstrated in a hypothetical scenario, although it appeared that additional support from Office 365 monitoring on Azure and the Microsoft Intelligent Security Graph was built into the demo, so it wasn't clear what ATP alone gets you.


Who should care?

While Windows 10 has been out for more than a year, it's only recently that companies have started deploying, or are planning to deploy, the new OS. Those plans might not include build 1607 today, but eventually even the LTSB build will include these capabilities. And while it is too soon to tell how effective these technologies will be, I am convinced that certain customers should today be investigating and preparing to use the new tools. Adding newly available protections on the entire system, from the hardware up, should be the eventual goal, but different needs should dictate different timelines and approaches to implementation.

Those with at-risk data (primarily financial or healthcare-related Personally Identifiable Information (PII), but also other security-sensitive data) should be working hard on this today. These customers should have a goal of full implementation in under two years, and the sooner the better. These use cases require an enforced whitelist mentality on what code may run, with protections down the stack.

Others probably should not move so fast, simply because this is all so new. Enforced whitelisting may not be practical for the majority of desktops for many years. These customers should be watching and playing with the technologies today, to enable them to move forward when ready. Adopting practices today in the desktop and application acquisition, prep, and deployment processes that assume eventual implementation will minimize the work later on.

These are steps that all companies can and should take to improve the security of their systems while running the same applications that they use today.
