VDI and TS are not more secure than physical desktops, Part 2/5: Centralization helps in other ways

If you read Part 1 of my "VDI and Terminal Server is not more secure than physical desktops" series, you may have noticed I left off discussing how VDI/TS doesn't improve data security. So if centralization doesn't help with security, what does it help with?

If you are able to centralize your data, there are several benefits:

  1. Collapse branch infrastructure - If you successfully deploy VDI/TS at large scale, you can probably collapse branch office file/print servers, email servers, and maybe even application servers.
  2. Data sharing - If all of your data is in one location, it is much easier to share among users without worrying about delays transmitting it over WAN connections or about replicating it across multiple sites.
  3. Data backup - If your data is located centrally, it is much easier to back up and to configure offsite backups. If your data were spread over 100 different sites, you could need multiple backup systems and multiple DR strategies.
  4. eDiscovery - If your organization requires eDiscovery for audit purposes, having the data in one place makes this easier. You will still, of course, need to address eDiscovery on laptops, smartphones, tablets, etc., but it does make things simpler.
  5. Proactive response to security incidents - If you deploy VDI/TS and all of your desktop operating systems run in a centralized data center (or regional data centers around the world), then you can patch those Windows instances, distribute A/V signatures, and push HIPS agent updates far more rapidly than if those assets were spread across WAN links or frequently disconnected from the network, as laptops are (a rough sketch of what that fan-out could look like follows this list).
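To illustrate that last point, here is a minimal Python sketch of fanning an update task out across a set of centralized VDI instances. The host names and the push_av_signatures helper are hypothetical placeholders rather than any real management API; the point is simply that a known, always-on inventory sitting in one data center is easy to iterate over quickly.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical inventory of centralized VDI hosts; in practice this would
# come from your broker or hypervisor management tooling.
VDI_HOSTS = [f"vdi-{n:03d}.datacenter.example.com" for n in range(1, 251)]

def push_av_signatures(host: str) -> str:
    """Placeholder for whatever mechanism you actually use (SCCM, WSUS,
    your A/V vendor's console, WinRM, etc.) to update a single host."""
    # A real implementation would invoke your management tooling here.
    return f"{host}: signatures updated"

# Because every instance lives in the data center and is always reachable,
# a simple thread pool can cover the whole estate in one pass.
with ThreadPoolExecutor(max_workers=25) as pool:
    for result in pool.map(push_av_signatures, VDI_HOSTS):
        print(result)
```

Try doing the same thing against laptops scattered across WAN links and home offices, and the gap in response time becomes obvious.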

The problem is, data centralization is really tough to achieve these days…

In the first part of this article, I said I've been working with this technology for 20 years. Twenty years ago, few people had personal computers at home. Even fewer had any way of connecting those computers to other computers. I've been around a long time and owned modems all the way from 300/1200 baud up through 56k before I moved on to ISDN/DSL/cable, etc. as the Internet started ramping up. Back in the early '90s there was very little exchange of files between people. Most data was exchanged on floppy disks; there was no Internet to speak of, and the only public exchange mechanisms were BBSes, CompuServe, AOL, Prodigy, etc. The threat of viruses/trojans was minimal. Obviously the Internet changed that, and it changed it fundamentally in two ways:

  1. It made it much easier to share data with people (especially to share data [read: malware] with people who should have been smart enough to know they shouldn't open your attachment).
  2. It put computers into an always-online state.

Since the advent of the Internet, most computers are always connected. Unsolicited emails come by the thousands. Website drive-by downloads are commonplace. But these things are only half of the data security problem we're talking about. The other issue is loss of control of data. The rise of web/cloud technologies like cheap email (Gmail, Hotmail, etc.) and SaaS-style applications like Dropbox, Box, SpiderOak, SkyDrive, SugarSync, etc. means it's trivial for a user to get data outside of your organization and into locations where you can't possibly protect it, much less audit its use. The rise of smartphones and tablets means your end users are going to want access to their data when and where they want it. Whether you think you can control their use of data or not, chances are you will fail.

It's really a matter of trust...

Trust is a term that is tossed around the technology world every day. Do you trust this EXE to run on your computer? Do you trust this website to have more privileged rights on your PC? Do you trust this Word document I'm emailing you? Trust also extends beyond the desktop we try to secure. Do you trust your users not to take company data off company computers? Do you trust employees to follow best practices in securing the home PCs you allow them remote access from? Do you trust that your A/V vendor is keeping up with the latest threats? Do you trust that your banking institution is doing everything possible to protect your financial information? Do you trust Apple, Google, Amazon, etc. with your credit card information (for App Store purchases as well as NFC implementations), your email security, your browsing experience?

The problem is that the trust model is broken. It's not broken a little; it's broken a lot. The entire SSL/CA infrastructure is flawed and has already been exploited multiple times. The simple reality is that we can't rely only on anti-virus companies or security vendors installing software that tries to intercept bad software before it can cause damage. If we take that approach, it's already too late. Two-factor authentication is a really good security practice that improves the probability that the person using an operating system or website is in fact the real user. Well, that's true as long as we can be sure our two-factor authentication solution hasn't been compromised *cough* RSA *cough*. Again, it's all about trust. If we trust our two-factor vendor, then we assume that vendor has security practices in place to prevent the two-factor solution from being compromised. If that's not the case, then we've placed too much trust.
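To make that trust dependency concrete, here is a minimal sketch of how a time-based one-time password (the kind of code a two-factor token or authenticator app displays) is typically derived, along the lines of RFC 6238. The Base32 secret shown is a made-up example; the whole scheme stands or falls on that shared seed staying secret, which is exactly why a compromise at the token vendor is so damaging.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password from a shared Base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # 30-second time step
    msg = struct.pack(">Q", counter)                # counter as 8-byte big-endian
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

if __name__ == "__main__":
    # Hypothetical demo seed; a real deployment provisions one per user/token.
    print(totp("JBSWY3DPEHPK3PXP"))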

By the way, I want to make it absolutely clear that I'm not picking specifically on RSA or Windows here. Windows has had its share of security issues over the years, but Apple OS X, Linux, and other operating systems are not flawless either; they have their own security faults and incidents. The reason Windows is such an attractive attack target is that it has had roughly 90% market penetration. If you are an exploit writer and you want to compromise a remote company, of course you're going to write an exploit for the operating system they are most likely to be running. As Apple's popularity increases and as smartphones become the dominant access device, I'm sure you'll see plenty of OS X, iOS, and Android exploits become the norm going forward.

So if we can't trust anyone, what do we do?

Stop using the Internet.

All joking aside, this would fix the trust problem. If you never opened an email, never opened an attachment, never browsed a website, and turned off your network connection, you'd probably be fine. Since most people are rolling their eyes at this point because they recognize that this isn't practical, we need to start discussing ways we could reduce this risk. Notice I say reduce and not eliminate, because I think information security is all about presenting the smallest practical attack surface. You'll never completely eliminate security risk. Where there's a will, there's a way.

Stay tuned for Part 3 where I'll talk about mitigation strategies for data security...


