VDI and TS are not more secure than physical desktops, Part 2/5: Centralization helps in other ways

If you read Part 1 of my "VDI and Terminal Server are not more secure than physical desktops" series, you may have noticed I left off after discussing how VDI/TS doesn't improve data security. So if centralization doesn't help with security, what does it help with?

If you are able to centralize your data, there are several benefits:

  1. Collapse branch infrastructure - If you are successful at deploying VDI/TS at large scale, you can probably collapse branch office file/print servers, email servers, and maybe even app servers.
  2. Data sharing - If all of your data is in one location, it will be much easier to share data among users without needing to worry about delays transmitting that data over WAN connections or about replicating data across multiple sites.
  3. Data backup - If your data is located centrally, it will be much easier to back up and to configure offsite backups. If your data were spread over 100 different sites, you would potentially need multiple backup systems and multiple DR strategies.
  4. eDiscovery - If your organization requires eDiscovery for audit purposes, having the data in one place makes this somewhat easier. You will still, of course, need to address eDiscovery on any laptops, smartphones, tablets, etc.
  5. Proactive response to security incidents - If you deploy VDI/TS and all of your desktop operating systems run in a centralized data center (or regional data centers around the world), then patching those Windows instances, distributing A/V signatures, updating HIPS agents, etc. can be accomplished more rapidly than if those assets were spread over WAN links or frequently disconnected from the network, as is the case with laptops.

The problem is, data centralization is really tough to achieve these days…

In the first part of this article, I said I've been working with this technology for 20 years. Twenty years ago, few people had personal computers at home, and even fewer had any way of hooking those computers up to other computers. I've been around a long time and owned many different models of modems, all the way from 300/1200 baud up through 56k modems, before I moved on to ISDN/DSL/cable, etc. as the Internet started ramping up. Back in the early '90s there was very little exchange of files between people. Most data was exchanged on floppy disks; there was no public Internet at that time, and the only public exchange mechanisms that existed were BBSs, CompuServe, AOL, Prodigy, etc. The threat of viruses/trojans was minimal. Obviously the Internet changed that, and it changed it fundamentally in two ways:

  1. It became much easier to share data with people (especially sharing data [read: malware] with people who should be smart enough to know that they shouldn't be opening your attachment).
  2. Computers moved to an always-online state.

Since the advent of the Internet, most computers are always connected. Unsolicited emails come by the thousands. Website drive-by downloads are commonplace. But these things are only half of the data security problem that we're talking about. The other issue is loss of control of data. The rise of web/cloud technologies like cheap email (Gmail, Hotmail, etc.) and SaaS-style applications like Dropbox, Box, SpiderOak, SkyDrive, SugarSync, etc. means that it's trivial for a user to get data outside of your organization and into locations where you can't possibly protect it, much less audit its use. The rise of smartphones and tablets means that your end users are going to want access to their data when and where they want it. Whether you think you can control their use of data or not, chances are you will fail at this.

It's really a matter of trust...

Trust is a term that is tossed around the technology world every day. Do you trust this EXE to run on your computer? Do you trust this website to have more privileged rights on your PC? Do you trust this Word document I'm emailing to your computer? It also extends beyond the desktop that we try to secure. Do you trust your users not to take company data off company computers? Do you trust employees to use best practices to secure the home PCs you provide remote access from? Do you trust that your A/V vendor is keeping up with the latest threats? Do you trust that your banking institution is doing everything possible to protect your financial information? Do you trust Apple, Google, Amazon, etc. with your credit card information (for App Store purchases as well as NFC implementations), your email security, your browsing experience?

The problem is that the trust model is broken. It's not broken a little; it's broken a lot. The entire SSL/CA infrastructure is flawed and has already been exploited multiple times. The simple reality is that we can't rely only on anti-virus companies or security vendors installing software that tries to intercept bad software before it can cause damage; if that's our only line of defense, it's already too late. Two-factor authentication is a really good security practice that can improve the probability that the person using an operating system or website is in fact the real user. Well, that's true as long as we can be sure that our two-factor authentication solution hasn't been compromised *cough* RSA *cough*. Again, it's all about trust. If we trust our two-factor vendor, then we assume that vendor has security practices in place to prevent the two-factor solution from becoming compromised. If that's not the case, then we've placed too much trust.
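To make that trust relationship concrete: most two-factor tokens boil down to a shared secret seed plus a one-time-password algorithm. Below is a minimal sketch of the open TOTP standard (RFC 6238, which builds on HOTP from RFC 4226), the scheme behind many software tokens. RSA's SecurID uses a proprietary algorithm rather than TOTP, but the trust model is the same: whoever holds the seed can generate every future code, which is exactly why a breach at the token vendor is so damaging.

```python
import hashlib
import hmac
import struct


def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Counter-based one-time password (RFC 4226)."""
    # HMAC-SHA1 over the big-endian 8-byte counter
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation: low nibble picks the offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238): counter = elapsed 30s steps."""
    return hotp(secret, unix_time // step, digits)


# RFC test vectors: note the seed is the *entire* secret.
print(hotp(b"12345678901234567890", 1))             # RFC 4226 vector: 287082
print(totp(b"12345678901234567890", 59, digits=8))  # RFC 6238 vector: 94287082
```

The seed never changes and is shared between the token and the server, so the vendor's handling of seed records is the whole ballgame.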

By the way, I want to make absolutely clear that I'm not picking specifically on RSA or Windows here. Windows has had its share of security issues over the years, but Apple OS X, Linux, and other operating systems are not flawless either; they have their own security faults and incidents. The reason Windows is such an attractive attack target is that it has had roughly 90% market penetration. If you are an exploit writer and you want to be able to compromise a remote company, of course you're going to write an exploit for the operating system they are most likely to be running. As Apple's popularity increases over the years and as smartphones become the dominant access device, I'm sure you'll see tons of OS X, iOS, and Android exploits become the norm going forward.

So if we can't trust anyone, what do we do?

Stop using the Internet.

All joking aside, this would fix the trust problem. If you never opened an email, never opened an attachment, never browsed a website, and turned off your network connection, you'd probably be good. Since most people are probably rolling their eyes at this point because they recognize that this isn't practical, we need to start discussing ways that we could potentially reduce this risk. Notice I say reduce and not eliminate, because I think information security is all about presenting the least probable attack surface. You'll never completely eliminate security risk. Where there's a will, there's a way.

Stay tuned for Part 3 where I'll talk about mitigation strategies for data security...

Join the conversation

Greetings Shawn,

1400 views and not one comment!

I thought I better come along and say hello in case you were getting discouraged ;)

I liked part one; it was a very refreshing take on the way we look at data that I had not seen much before. But this second part is wandering a bit, and I am hoping it picks up again in part three!

I did want to say that although you have written hardly anything I can disagree with, I do think that you are asking the wrong questions and approaching this from the wrong direction.

VDI/TS is not more secure than traditional desktops?

I disagree, but that's beside the point.

Who cares even if you do agree with that statement? The world is moving away from the Microsoft desktop monopoly, and a whole industry is working towards setting the desktop free.

A better question would be to ask how we can make VDI more secure than it already is. I would really like to see you apply your extensive experience to cracking that one.

I'm not entirely sure why you think traditional PCs are more secure than VDI, because your first two parts have not covered that yet, so I will talk about something else for now.

You did actually solve the entire problem of cyber-security with your last statement, when you said "STOP USING THE INTERNET".

That DOES fix the trust problem, and it's the way a number of large organizations, such as my customer the National Nuclear Security Administration (NNSA), are fighting the problem of cyber-attack right now.

The key is not inventing yet more magic tech (ahem Bromium), but doing something very simple using tools we already have.

Quite literally turn off the internet on your internal desktop infrastructure and provide your employees with a second desktop (ideally non-persistent) for all of those internet facing activities.

This second desktop platform needs to be physically separate from your own infrastructure, so that when it is breached the attack occurs as far away from your internal networks as possible.

By forcing users to only access the open internet on a second desktop, hosted on a platform built to handle the risk, you are able to significantly reduce the attack surface of your organisation.

This is the model being embraced by the people who protect our nuclear weapons and it aligns perfectly with the goal of information security being able to provide the least probable attack surface.

VDI does that; traditional desktops cannot.

We need to start thinking outside of the box in order to arrive at solutions like this, ones that really address the root cause (user behaviour) and develop ways to segregate this onto platforms best built to handle the risk.

The solution I describe above was not dreamt up by information technology professionals such as yourself with decades of experience in the IT space, it was formulated and executed successfully on a very large scale by cyber-security professionals who understand the mitigation of risk.

Let's ask the right questions instead of the wrong ones from now on; that's part of this process we call security too.

I really like your article Shawn and I really look forward to the next one but I question the relevance of a comparison between TS/VDI and traditional desktops against the backdrop of serious cyber-attacks.

The growing number of attacks on our cyber networks has become, in President Obama's words, "one of the most serious economic and national security threats our nation faces."

It's serious, so why are we navel-gazing and comparing apples to pears?


I believe there are a couple of dimensions to TS/VDI which make its centralised nature more secure. It is certainly easier to attach a physical 'listening' device such as a keyboard logger or network tap to a desktop PC 'out in the wild' than it is to compromise a centralised, or virtualised, desktop in the same way. This kind of threat has been responsible for several multi-million dollar frauds at major banks in the last 10 years. In addition, physical access to a desktop creates a headache for data disposal, as the distribution of data across hundreds of corporate PCs and their internal disks has been exploited many times to steal data for nefarious advantage, even in government environments. I would also argue that it is easier to enclave and baseline a centrally hosted desktop to the point where usage 'out of the ordinary' can more easily be spotted and dealt with!

I'm not disagreeing with you; your article is well measured and gives plenty of food for thought, but there are many arenas where the centralised model provides distinct security advantages.

I think this certainly adds weight to the argument that any IT solution should be designed and implemented based on specific corporate requirements, and not on anecdotal evidence that one solution is more secure/scalable/cheaper/manageable than another. One man's secure/centralised solution might be another's management nightmare!


I'll have to wait until part 5 is posted before going into detail on the points made here as it appears there's much left yet to cover.

That being said, Part 1 was a complete miss due to the utter lack of understanding of what data truly is and the forms it takes. If we cannot use industry-standard terms when speaking to information assurance in general, then the discussion has no business going in that direction in the first place.

This blog is a respected resource for many. Choosing to omit rather than speak to defined terms leads me to believe that no objective conclusion can be drawn from what I have read and will read. I will of course still lend an opinion, in the hope that someone will one day read it and be able to properly draw their own conclusion, whether it be the same or contrary.