Security by isolation methods: VDI and TS are not more secure than physical desktops, Part 4 of 5

If you read the last part of my "VDI and TS are not more secure than physical desktops" series, I left off discussing security by isolation. I promised that I'd discuss the different methods of security by isolation and how they help and hinder our two primary security issues: the Internet browser and the email client. If you missed it and need to catch up, check out the previous articles in the series.

Matryoshka Dolls (aka Inception)

You may be wondering why I'm talking about Matryoshka dolls in a conversation about desktop security. These are the cute painted wooden dolls that nest inside each other until you reach the very smallest doll at the center. You can think about desktop security in the context of isolation the same way: isolation can be done at many levels, so there are potentially many isolation layers nested inside each other. With that in mind, let's talk about some of the ways security isolation can address the issues we face with the Internet browser and the email client.

There are four primary ways that security isolation can be provided: sandboxing, microkernel virtualization, OS virtualization, and offsite OS virtualization.


Sandboxing:

Sandboxing is a technology whereby the software developer creates an isolation container in which untrusted documents and attachments can be opened. This isolated container often supports only a limited subset of what the rest of the application provides. This is done on purpose to limit the number of lines of code (the attack surface) that a malicious piece of code can exploit. Essentially, the sandbox is assumed to contain malicious code, so it needs to be designed to be as restrictive as possible in terms of what code runs there.

Often the sandbox will also implement a policy engine of sorts that indicates what types of resources a sandboxed document at a particular level of trust should be allowed to access. Sometimes this is a restriction on operating system resources and sometimes it's implemented as an OS API hook to restrict or redirect API calls that are made by the sandbox in order to limit the damage that could occur as a result of malicious code that would otherwise infect the machine.
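A policy engine of this kind can be sketched as a small deny-by-default lookup table. The trust levels and resource names below are purely illustrative and not taken from any real product:

```python
# Hypothetical sketch of a sandbox policy engine: it maps a document's
# trust level to the resources the sandboxed process may touch.
# Levels and resource names are illustrative only.

POLICIES = {
    "untrusted": {"read_user_files": False, "network": False, "clipboard": False},
    "limited":   {"read_user_files": False, "network": True,  "clipboard": False},
    "trusted":   {"read_user_files": True,  "network": True,  "clipboard": True},
}

def is_allowed(trust_level, resource):
    """Deny by default: unknown trust levels or resources get no access."""
    return POLICIES.get(trust_level, {}).get(resource, False)
```

The deny-by-default lookup mirrors the design goal described above: anything the policy doesn't explicitly permit is refused, so a sandboxed document at an unknown trust level gets no access at all.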

Sandboxing isn't a perfect technology, but it goes a long way toward providing a higher level of security, since the sandbox code is much less likely to have as many vulnerabilities as the main application. Still, sandboxes are not flawless: there are ways to escape them, and hackers have demonstrated escapes against a variety of sandboxed applications.

Some examples of modern apps leveraging sandboxing are Google Chrome/Chromium, Adobe Acrobat 10.1+ Protected View, and Office 2010 Protected View. If we apply this concept to our two primary security issues of the Internet browser and email attachments, we find that sandboxing can provide some improved security, but since sandboxes still run on top of an exploitable operating system, we are only as protected as the sandbox allows. Once something escapes the sandbox, the regular rules apply.


Microkernel virtualization:

Microkernel virtualization is a solution that leverages hypervisor technology to create isolation environments in which application code can run completely isolated from the rest of the operating system. The hypervisor leverages hardware virtualization support from Intel and/or AMD and therefore provides lower-level access controls than a typical software-level virtual machine implementation or sandboxing. If done correctly, this means the isolated code has no way to escape the isolation environment unless it somehow compromises the hypervisor code itself.

A modern day example of microkernel virtualization is Bromium. Given that the microkernel code is much leaner than the entire monolithic operating system and given that the hardware assisted virtualization provides a hardware enforced isolation, it's much less likely that this solution will be compromised unless one of three things happen:

  1. Someone discovers a weakness in the product's trust model. Like all products, Bromium has a model that allows specific "trusted sites" to bypass the Bromium microkernel virtualization. If someone is able to compromise that trust model, then it's game over. This is the most probable attack vector because it can be done without exploiting weaknesses in the Bromium code itself (i.e., within the host Windows OS). I'm unsure what specific protection mechanisms Bromium plans to leverage to prevent such an attack, and I have not had the time to test how vulnerable the product is to this specific attack.
  2. Someone discovers a weakness in the microkernel virtualization stack.  This is the second most probable attack vector given that this software product is entirely new code and could of course have vulnerabilities itself.
  3. Someone discovers a weakness in the hardware virtualization stack provided by Intel/AMD. This is the biggest risk not only to products like Bromium, but to all virtualization solutions and even to regular run-of-the-mill operating systems. This means that there is some vulnerability within the Intel VT or AMD-V code itself that allows for either local privilege elevation and/or guest-to-host hypervisor escape. This sounds far-fetched, but in June a vulnerability (within operating systems and virtualization stacks that run on Intel CPUs) was discovered and reported by Rafal Wojtczuk, who currently works for Bromium. Details of this vulnerability are being presented at the Black Hat security conference in Las Vegas this week.
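Because these solutions depend on Intel VT-x or AMD-V, a quick way to check whether a Linux host's CPU advertises hardware virtualization support is to look for the `vmx` or `svm` flags in `/proc/cpuinfo`. A minimal, Linux-specific sketch:

```python
# Sketch: detect whether a Linux CPU advertises hardware virtualization
# extensions. Intel VT-x shows up as the "vmx" flag and AMD-V as "svm"
# in /proc/cpuinfo. Illustrative only; Linux-specific.

def virtualization_support(cpuinfo_text):
    """Return a human-readable name for the extension found, or None."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:
                return "Intel VT-x"
            if "svm" in flags:
                return "AMD-V"
    return None
```

On a real machine you would call `virtualization_support(open("/proc/cpuinfo").read())`; note that the flag being present doesn't guarantee the feature is enabled in the BIOS.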

If we apply microkernel virtualization to our original challenge of protecting the operating system from the Internet browser and email attachments, we find that we can provide a much higher level of isolation that is far less likely to be compromised, thanks to the added isolation provided by the hardware virtualization assist. That's not to say this implementation model is impervious to attack, but it provides a much better level of protection than standard sandboxing.

OS Virtualization (or monolithic kernel isolation):

OS virtualization, or monolithic kernel isolation, is a technology that should be familiar to everyone: it's your basic hypervisor implementation that completely isolates a running instance of Windows, Linux, etc. from other running instances of an operating system. It covers both Type-1 (bare-metal hypervisor) and Type-2 (host-based hypervisor) implementations. The basic premise of OS virtualization is that you run a completely separate copy of an operating system on the same piece of hardware. If we apply this concept to our original challenge of protecting the operating system from the Internet browser and email attachments, we find that we could run our Internet browser and our email client within a completely separate instance of the operating system. If we do this, then that running instance of the operating system could be compromised without impacting our parent/host operating system.

Think about this for a moment. You can take a copy of Windows 7 and place it on your host hardware, then load up your favorite hypervisor and install another copy of Windows 7 inside the hypervisor. Now, perform all Internet browsing and email attachment work inside this additional copy of Windows. If this additional copy of Windows is compromised, it will not affect your host operating system instance (assuming the exploit isn't a guest-to-host escape, as we talked about above). In addition, you could leverage snapshot/revert technology or write-filter VMs to ensure that every time this VM boots on your machine, it boots to a clean operating system.
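The snapshot/revert workflow can be sketched against a hypervisor's command-line interface. Assuming VirtualBox's VBoxManage tool (the VM and snapshot names here are hypothetical), a "clean boot" might look like:

```python
# Sketch of the snapshot/revert workflow using VirtualBox's VBoxManage CLI.
# The VM name ("insecure-browser") and snapshot name ("clean") are
# hypothetical examples, not defaults of any product.
import subprocess

def clean_boot_commands(vm="insecure-browser", snapshot="clean"):
    """Return the command lines to revert the VM to a clean state and start it."""
    return [
        ["VBoxManage", "controlvm", vm, "poweroff"],          # stop it if running
        ["VBoxManage", "snapshot", vm, "restore", snapshot],  # revert to the clean snapshot
        ["VBoxManage", "startvm", vm, "--type", "gui"],       # boot the clean copy
    ]

def clean_boot(vm="insecure-browser", snapshot="clean"):
    for cmd in clean_boot_commands(vm, snapshot):
        subprocess.run(cmd, check=False)  # poweroff fails harmlessly if already off
```

Running something like this at every boot gives you the "power it off and the malware is gone" property discussed below, at the cost of also discarding any state the user wanted to keep.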

This does not eliminate the chance that this VM will get compromised, it only means that if it does get compromised, you can power it off and power it back on and the malware code will be gone. Of course, if you continue to visit the same sites and open the same documents that caused the infection, then you will be continually "owned" every day that you use the VM.  There are some weaknesses with this approach that you should be aware of as well:

  • Additional host PC performance impact - Any time you run a hypervisor with another copy of the operating system on it you are going to have CPU/RAM/DISK performance impact. You will want to have Dual/Quad core CPUs, at least 4 GB of RAM and preferably SSD drives to make performance acceptable.
  • Data interoperability - In the event that the attachments you've downloaded via email or the data that you're seeing on the Internet needs to be used within the host operating system, you will need to find a way to get that data out of the "insecure" VM and into the "secure" host OS. There are many ways to exchange data between a host OS and a guest VM, but doing so compromises the very security model you were hoping to achieve by implementing OS virtualization for insecure resources in the first place. You may end up compromising the secure host OS by moving data from the insecure OS into it.
  • User preference retention - While the email client may not be a big deal when it comes to retaining preferences, the Internet browser will be a bigger challenge. If you leverage VM snapshot/revert technology or write-filtered disks, then each time you power on the "insecure" browser OS you will lose Internet history, cookies, bookmarks/favorites, browser add-ons, etc. From a security standpoint, this is exactly what you want, and it's the purpose of providing the isolated operating system. Still, the average user will not find it very friendly and will not appreciate the inflexibility of this approach. To combat this, you can implement a variety of technologies that allow you to persist specific pieces of data, but any amount of data persistence provides one more vehicle for a compromise to survive the "clean reboot" approach we are seeking.

Offsite OS Virtualization:

Offsite OS virtualization applies the exact same principles we outlined above with respect to leveraging a hypervisor solution of some sort to provide an insecure Internet browser and/or email client. The key difference with the offsite solution is that the running virtualized operating system is hosted physically away from your hardware. This offsite location could be as simple as a separate network VLAN with a security perimeter (or a DMZ), or it could be entirely off your premises with a third-party Cloud/DaaS provider.

A few examples of such a solution are Desktone or Tucloud (both of which are DaaS solutions), or, for a browser-only solution, something like Authentic8. These solutions have one major distinct advantage: if the browser or email client is compromised, the vulnerable code is not running within your data center or on your local PCs. That code is executing in someone else's datacenter and therefore doesn't have an easy way of finding its way to your important data. These solutions are designed to work over a display remoting protocol and therefore may have no direct network route back into your data center or PCs.

While these solutions provide a good level of isolation, there are a few things that you should be aware of as potential usability issues with these approaches:

  • User experience with display remoting protocol - Generally speaking display remoting protocols have improved a ton over the last few years and as long as you have decent internet bandwidth (2Mbit+) and low latency (<100ms) you should generally have a good user experience. However, if you are watching a large amount of fast moving content such as Flash, HD videos, etc. the experience may be less desirable than if that content were running locally.  This should not deter you from these solutions though because there are many ways to address these issues. Speak to the vendors about your needs and try it out yourself to see if it works for you.
  • Data interoperability - Just as above, if we are leveraging this offsite hypervisor as a way of providing access to an insecure VM, this isolation becomes a risk should we need to exchange data between our host operating system and our remote insecure OS instance.
  • Application interoperability - Oftentimes in large organizations there will be line-of-business apps that need to interoperate with an email client or web browser. While there is nothing wrong with integrating these with the "secure" browser on the host PC, you may need to implement an additional piece of technology in order to control which websites can be accessed by the onsite "secure" browser vs. the offsite "insecure" browser. The last thing you want is someone using the local secure browser to visit an insecure site. There are many ways to accomplish this, but they are outside the scope of this article.
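One simple way to steer traffic between the two browsers is a deny-by-default URL router: only an explicit allowlist of internal sites opens in the local "secure" browser, and everything else goes offsite. A minimal sketch (the domain names are hypothetical):

```python
# Illustrative routing policy: only allowlisted internal hosts open in the
# local "secure" browser; everything else is routed to the offsite
# "insecure" browser. The domains below are hypothetical examples.
from urllib.parse import urlparse

SECURE_DOMAINS = {"intranet.example.com", "erp.example.com"}

def route(url):
    """Return 'secure' for allowlisted hosts, 'insecure' for everything else."""
    host = urlparse(url).hostname or ""
    return "secure" if host in SECURE_DOMAINS else "insecure"
```

In practice the same logic is often deployed as a proxy auto-config (PAC) file or a web gateway rule, but the principle is identical: default to the insecure path so that an unrecognized site can never reach the secure browser.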

Stay tuned for Part 5 of this article where I'll discuss the facts and myths around Persistent vs Non-persistent desktops.

Join the conversation



Thought I would just clarify that write filters don't necessarily require OS virtualisation, and can be executed on traditional physical PCs (common for internet cafes/libraries/labs, etc.).


Great series of articles Sean.

I can't help but wonder whether the complexity that we engineer into software these days which requires the development of additional software products such as that which Bromium are developing, highlights a key failing in our fundamental approach to security.

More complexity at the OS/app layers requires more complexity at the 'prevention' layer. This compounding spiral of complexity leads many of us who don't understand the complexity to trust the products at the 'prevention' layer implicitly, as we have no choice but to do so. This trust leads to a commonly blasé acceptance that we are protected. So, should we trust, or should we be paranoid, or simply just not care?

I think the Flame virus highlighted to me that it is possible for those with detailed knowledge to silently intercept and forward data which can be used quietly to achieve no end of malevolence. The threat which no one is aware of is the greatest threat!!

I have said for years that Windows has become a huge, monstrous and untamed beast which requires constant vigilance to keep watertight and secure; I'm sure Linux and Mac have some similar shortcomings. We really need to break down the monolithic OS, separate it into trusted blocks of code, and only call those functions which are necessary when we need them. All of these security approaches are merely 'band-aids' which temporarily address a flawed approach to computing.