How persistence affects security: VDI and TS are not more secure than physical desktops, Part 5 of 5

If you read the last part of my "VDI and TS are not more secure than physical desktops" series, you'll recall I left off discussing the various ways of performing isolation and how each approach works, along with some of the challenges of each isolation approach, particularly around usability. To continue this conversation, it's important to discuss persistent vs non-persistent operating system instances. If you missed the first parts and need to catch up, check out the previous articles in the series first.

Before we dig into the details of the security ramifications of Persistent vs Non-Persistent VDI or Terminal Server, I think it's important to spend a bit of time describing the difference between Persistent and Non-Persistent operating systems. Keep in mind that the ideas behind persistent vs non-persistent apply equally well to VDI and Terminal Services and can even apply to physical desktops.

Persistent Desktops:

Persistent Desktops are desktop operating systems whereby the file system of the computer (the C: drive) is configured in a fully read/write model. This does not necessarily imply that the user of the system is an administrator, and in fact they should not be. Rather, the model implies that the contents of the file system and the registry persist between reboots of the operating system.

There are several benefits of having a fully persistent disk model:

  • If you need to install operating system hot fixes, you deploy them using a tool like SCCM, LanDesk, Altiris, etc., and the hot fixes install on the OS. When the OS is rebooted, those operating system hot fixes are still in place. If your A/V agent needs to update its signature files (aka DAT files), it does so throughout the day, and upon rebooting the PC the most current DAT files are still in place.
  • If you need to have unique pieces of software on a per-user or per-team basis, persistent disk images help. For example, if only 5 people in my organization need Adobe Creative Suite, then I can push the Adobe Creative Suite installation package to their 5 desktops (whether physical or virtual) and the software will exist only on the C: drive of their computers and not on any other system. Proponents of non-persistent images would argue that this could easily be accomplished through the use of Application Virtualization and/or OS Layering technologies. I would agree that this is sometimes true. However, Application Virtualization and Layering both have performance overhead and won't necessarily work for all types of software (depending on the solution), so they are not a 100% solution. Pushing software for direct installation is still the most broadly applicable solution to support all needs.
  • If you need to stagger deployments of software, it's pretty easy to update machines within dynamic collections of systems and slowly complete the rollout of a new piece of software.

There are also several disadvantages to having a fully persistent disk model:

  • Since each system has its own persistent copy of its C: drive contents, you can have a massive disk footprint within your data center. Think about it: if each VDI desktop is a 50GB disk image and you have hundreds or thousands of these, you will require tens or hundreds of terabytes of storage to support your VDI project on persistent disk images.
  • As each system is built up through multiple software installs, followed by uninstalls, followed by new installs in various patterns, you can run into situations where individual desktops experience problems with failed software installations or software that isn't removed cleanly.
  • Deploying new software updates requires the software to be packaged and pushed to each machine individually. Whether the machines are powered on when the distribution occurs, along with various issues related to the health of the machines, can result in less than 100% success in the rollouts.

Non-persistent Desktops:

Non-persistent desktops are desktop operating systems whereby the file system of the computer (the C: drive) is configured in a way such that writes to the file system are not preserved. This can be accomplished in a variety of ways, but generally speaking there is either a write filter within the VM that redirects writes to an alternate location where they are not preserved upon reboot, or this is done at the hypervisor level by creating a snapshot/clone/differential disk whereby the changes are discarded upon reboot. Non-persistent desktops are often associated with the term "Common Image," where multiple non-persistent desktops leverage the exact same image file. In this circumstance, all desktops built from this template have the exact same file system contents, and upon reboot they all revert back to this clean slate.
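
To make the mechanics a bit more concrete, here's a deliberately simplified Python sketch of the write-redirection idea. It's a toy model, not any vendor's actual write filter: reads fall through to a shared base image, writes land in a per-session overlay, and a "reboot" simply throws the overlay away.

    # Toy model of a non-persistent disk: the base image is shared and
    # read-only, writes go to an overlay, and rebooting discards the overlay.
    class NonPersistentDisk:
        def __init__(self, base_image):
            self.base = dict(base_image)   # shared common image (never modified)
            self.overlay = {}              # this session's writes (differencing layer)

        def read(self, path):
            # The overlay wins if the file was changed this session
            return self.overlay.get(path, self.base.get(path))

        def write(self, path, data):
            # Writes never touch the base image
            self.overlay[path] = data

        def reboot(self):
            # Revert to the clean slate
            self.overlay.clear()

    disk = NonPersistentDisk({"C:/Windows/notepad.exe": "clean binary"})
    disk.write("C:/Users/Public/dropper.exe", "malware")
    disk.reboot()
    print(disk.read("C:/Users/Public/dropper.exe"))  # None - gone after reboot

The same idea applies whether the redirection happens inside the VM (a write filter) or underneath it (a hypervisor differencing disk); either way, the base image stays pristine.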

There are a few benefits of having a non-persistent disk model:

  • Since each system reverts to a clean slate each time it's powered on, you never have to worry about an end user breaking an application by deleting important files or removing important registry keys. Even something as simple as a user reconfiguring an application can be reset back to the application's proper configuration with a simple reboot.
  • Since each system is leveraging the same disk image as its starting point, you can save substantial amounts of storage space. In the persistent model above we talked about tens or hundreds of terabytes of storage. If you had 1,000 desktops sharing the same non-persistent disk image, you would only need 50GB of storage on the back end to support those 1,000 desktops (a quick back-of-the-envelope calculation follows this list). You can achieve massive storage space savings as well as IOPS reduction, since the blocks of data that make up this disk image can be cached by the hypervisor or storage subsystem.
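
For a rough sense of scale, here's that back-of-the-envelope math as a small Python snippet, using the 50GB image size and 1,000-desktop count from the bullets above (swap in your own numbers; real deployments still need some space for per-VM delta disks and user data):

    # Storage footprint: every persistent desktop keeps a full copy of the
    # image, while non-persistent desktops share a single common image.
    image_gb = 50
    desktops = 1000

    persistent_tb = image_gb * desktops / 1024
    non_persistent_tb = image_gb / 1024

    print(f"Persistent:     {persistent_tb:.1f} TB")   # ~48.8 TB
    print(f"Non-persistent: {non_persistent_tb:.2f} TB (plus small per-VM deltas)")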

There are also several disadvantages of the non-persistent disk image model:

  • If you deploy all desktops from a common image, then all of those systems need to contain the exact same set of applications. You could theoretically include additional software and control access to the various executables using a Software Restriction Policy or AppLocker, but your software vendors may decide that the mere installation of the software in the disk image constitutes license consumption. If this is the case and you can't handle the one-off application needs via Application Virtualization or other technologies, then you will be forced to create a different image for each permutation/combination of unique applications. In low-complexity environments, it's very likely you can support tens or hundreds of users with 1-10 unique images. When you try to scale a solution like this to environments that contain thousands or tens of thousands of desktops and hundreds or thousands of applications, it will be extremely difficult to support these systems without maintaining tens to hundreds of unique desktop images.
  • Since non-persistent disk images revert to a clean slate with each reboot, you'll need to be careful with respect to how you manage operating system hot fixes and anti-virus updates. If you revert to a two-month-old image, the OS will boot up with out-of-date OS hot fixes and A/V signatures. You may have processes to update the DAT files, but that won't work well for A/V software updates or OS hot fixes, since those will typically require an OS reboot to apply. If your organization has audit requirements to report back on OS hot fixes and A/V updates, you will be constantly updating your disk images to keep them current. If you have many disk images to maintain, this can be quite cumbersome.
  • Since non-persistent disk images revert to a clean slate with each reboot, you'll also have to be concerned with any user-based preferences or user-installed apps that will be lost each time the system reboots. There are methods to preserve some of this content, but those methods don't apply to a pure non-persistent desktop; rather, they create a new category of system that is a hybrid persistent / non-persistent desktop.

NOTE: You don't need to implement VDI or Terminal Server to achieve non-persistent desktops. You can easily support non-persistent desktops by using a local hypervisor with snapshot revert capabilities, or even on a physical PC with a write-filter software solution or a system restore solution like Deep Freeze.

Hybrid Persistent / Non-Persistent Desktops:

We've previously talked about the benefits of both persistent desktops and non-persistent desktops. Some of the weaknesses of non-persistent desktops can be reduced by implementing a limited form of persistence within the model.

Here are a few examples of how you can achieve a hybrid approach:

  • Implement a profile management solution that allows you to back up and restore user settings for Microsoft Office, IE, and other third-party apps. This allows you to keep the benefits of a clean-slate desktop while still preserving some basic application customizations to minimize user frustration (a simple sketch of this idea follows this list).
  • Implement a disk layering solution to provide the ability to support user-installed applications. If the user wants to install their own copy of iTunes, Google Chrome, etc., these can all be supported by leveraging a disk layering solution, because the user-installed components get installed into a differential disk that is layered on top of the common disk image. Some would argue that this level of persistence destroys the benefits of a single common non-persistent image, but that isn't necessarily the case. You can still achieve storage savings, IOPS reduction, and a common set of OS files that will make it easier to update the underlying OS across multiple systems.
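
To illustrate the profile management bullet above, here's a minimal Python sketch of the logoff/logon flow: persist a few selected per-user settings folders to external storage at logoff, and copy them back onto the freshly reverted desktop at logon. The folder list and paths are illustrative assumptions, not any particular product's behavior.

    # Simplified sketch of profile management on a non-persistent desktop:
    # back up selected settings at logoff, restore them at logon.
    import shutil
    from pathlib import Path

    # Illustrative examples only - real products manage far more than this
    SETTINGS_FOLDERS = [
        "AppData/Roaming/Microsoft/Templates",   # e.g. Office templates
        "AppData/Roaming/Microsoft/Signatures",  # e.g. Outlook signatures
    ]

    def backup_settings(profile: Path, store: Path) -> None:
        """At logoff: copy selected settings off the non-persistent disk."""
        for folder in SETTINGS_FOLDERS:
            src = profile / folder
            if src.exists():
                shutil.copytree(src, store / folder, dirs_exist_ok=True)

    def restore_settings(store: Path, profile: Path) -> None:
        """At logon: put the saved settings back on the clean-slate desktop."""
        for folder in SETTINGS_FOLDERS:
            src = store / folder
            if src.exists():
                shutil.copytree(src, profile / folder, dirs_exist_ok=True)

Notice that the Templates folder carried across reboots here is exactly the kind of persistence point discussed in the security section below: anything you preserve for usability is also a place where malware could survive the reboot.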

Security in the Persistent vs Non-Persistent vs Hybrid world:

Many proponents of the non-persistent disk image model insist that it is a superior solution from an information security perspective because of the idea that if the operating system is exploited and ownership is established, the machine can be recovered simply by rebooting and the OS would be clean again. While I do agree with this statement, I think it places a thin veil of ignorance around the concept of what is truly secure.

Let's analyze this argument in a few ways:

  1. Is it better than a persistent disk that would keep the malware / infection over a long term period?

    Yes, absolutely.

  2. Does it mean you're safe?

    No. If a hacker compromises your system for even 8 hours, your data can already be stolen in that time. While giving the hacker 24 months is certainly going to make it easier, make no mistake that the data could be siphoned in a much shorter duration than that.

  3. Is resetting a system an effective means of dealing with virus/malware/trojans?

    No. There are plenty of people out there who have said, "You don't need A/V software on this thin client OS because you just reboot it and the infection is gone." Tell that to someone who has suffered from an Internet worm like Nimda, SQL Slammer, Nachi, etc. By the time you have rebooted the system, it has already been reinfected by another node on the network. There are no arguments from me that having a non-persistent OS does in fact make it easier to recover, but I'm not willing to go as far as to say it's the right way to deal with things. In addition, when we talk about the hybrid model, where we persist some forms of application data in order to balance clean-slate reboots with usability, we introduce possible attack vectors that allow re-infection. Take, for example, Office document templates. If you were to back up and restore Office document templates to provide higher usability, you create a location where malware can infect a clean system upon reboot as it reads its normal.dot in from the restored template folder. The hybrid model does provide a security benefit over the persistent model, as there will inevitably be fewer locations of persistence, but if there is any persistence at all, then we open up holes.

  4. Will I get clean desktops every single day with the non-persistent model?

    That depends. Some organizations are very good about forcing users to log off daily. Other organizations allow users to disconnect from their desktop and leave applications in a running state. I've worked in some customer environments where a user remained logged into their virtual desktop for nearly four weeks. If you don't have tight controls over users logging out of the environment, then those non-persistent desktops are effectively persistent for weeks on end. Again, while this is still better than a desktop that is always persistent, it's a far cry from the "new PC every day" that vendors often tout.

In summary, there are many different technologies that can be used to improve desktop security. Hopefully this series has helped you understand what the different options are and what some of the pros and cons of each approach are. While I personally love the concept of non-persistent desktops, it's also something that is incredibly difficult to achieve within large organizations, where I spend most of my consulting time. Also, non-persistent desktops only solve part of the problem and really don't address the core problem of trust and protection from malicious code. When the Internet browser and/or email client can be run offsite, it provides a nice level of isolation and abstraction for an organization, but it also introduces its own set of usability issues.

Today there is no perfect solution. There are a bunch of different options. You need to look at all of them and balance them against your organization's level of risk acceptance and usability impact to determine what the right solution is for you. Nothing is guaranteed to be secure. All we can do is strive to make it more difficult for the hackers. At the end of the day, time is money. The more difficult it is to hack something, the less inclined a hacker will be to spend the time to do so (unless you are a REALLY attractive target, in which case you need to rely on your information security practices to help you). Good luck…you'll need it!

Join the conversation

7 comments

Deep Freeze...right! I can't believe I forgot to put that in my "Doing it without VDI" session. I use that around here and I love it.


In the second to last paragraph you wrote "While I personally love the concept of non-persistent desktops it's also something that is incredibly different to achieve within large organizations".


Surely that should be "incredibly DIFFICULT" instead?


Thanks for writing this series of articles!


@Helge - ACK!  I just fixed one other typo and missed that one. Fixed now.


@Shawn


Great series, but after reading the articles I still don't know your points that prove "VDI and TS are not more secure than physical desktops"


VDI and TS are management technologies that are architected completely differently than Traditional Desktops, which can introduce security benefits or even problems as well. Remember, SCCM, LanDesk, Altiris, etc. don't make you more secure either.


RDSH, SHVD, CHVD, and Traditional Desktops are not security solutions; they are very unique and distinct client architectures and can be managed and secured very differently.


I believe this "which is more secure" argument originated when people were claiming that VDI is inherently more secure than Traditional Desktops. Then people ran off with the notion claiming that VDI is more secure than Traditional Desktops which is wrong.


I also think people misunderstood what "secure" meant. Security is an illusion and just because you may be more "secure" in one solution doesn't mean you're safe.


Personally, I never claimed a solution is more secure, but I did say that VDI provides more security out of the box for many reasons that many people have constantly outlined. Once you virtualize something, it makes it easier to consume and sometimes manage. Control and security are attributes of better management.


It all boils down to use cases and how your desktops are managed. The better they are managed, the more secure you are. If you have a large organization with complex requirements, then you start off out of the gates with a very tough and insecure business to protect. VDI may not be a good candidate, but since you are a big org chances are you have invested heavily in the current Desktop Management technologies anyway.


I must add that in addition to the management elements of RDSH, SHVD, CHVD, and Traditional PCs, the security vendor ecosystem around each solution determines how truly "secure" they are.


You really could have just created one article with a title called "Why Traditional PCs will be more secure than VDI and TS" and linked to Bromium's website.


I wonder if Cisco will do anything neat with Virtuata to help the security industry of SHVD, CHVD, and maybe even RDSH.


@Icelus, oh jesus, WTF with all this Bromium group think. It's got so much to prove.


Security, as this series has pointed out, is applied at many levels. Modern data exploits go after all of them and sit there while they figure out the next. If you work with any big bank or government, which I think you do, you will also know that the amount of money invested to exploit these is plenty. There will be no silver bullet. It may help, and let's face it, Bromium would be all over trashing VDI if their stuff worked on a virtual instance in the data center. I'm sure that is a hardware limit, but not my point.


I enjoyed this series because it brings to light perceptions and misunderstandings of VDI as a solution.  It also gets the community talking about security or information assurance.  I'm all for that and think it's a great thing.


My opinion however has not been swayed. Many of the pros and cons mentioned throughout this series never speak to risk in general. More specifically, risk likelihood. Nor do they look at threat mitigation.


The series fails to even highlight incident response in a virtual world vs. a PC world or the resources required to properly implement them.  Instead much of what has been talked about seems to boil down to one thing:


You can still be hacked in a VDI world just as you can in a PC world.


While I do not disagree with that statement, unfortunately this is not how you should look at the two from a risk and information assurance standpoint. I feel the author of this series fails to understand Confidentiality, Integrity, and Availability (CIA) when looking at data. In fact, information assurance of data was never looked at properly in the first place (citing Part 1 here).


I again make mention of my original comment.  If you do not understand the 3 types of data, then you cannot properly understand how to protect data.  


Everything said, I don't want people to think I'm preaching VDI here as the most secure environment for everyone. It's not a silver bullet as some may think. Each organization has its own risks and challenges to mitigating those risks. VDI can be more secure than PCs. Whether or not you make it more secure begins and ends with understanding your risk. Talk about non-persistent clones, malware, and viruses is only talk. Where the rubber meets the road is in attack vectors and strengthening/hardening your posture around 'data' from a CIA point of view.

