Blog posting on BrianMadden.com might be a bit light this week as we scramble to put the final touches on our BriForum 2013 London conference, which takes place this Thursday and Friday in London. (If you haven't registered yet and would like to attend, you still can.) But much like yesterday's post, I've got a bunch of little ideas that I'm interested in getting out there, so I figured this week is a good week for them.
What I'm interested in today is the idea that VDI is now possible in many more situations than in the past, thanks to two recent improvements: block-level single-instance storage, which makes persistent disk images technically and economically feasible (Atlantis, DataCore, GreenBytes, Tegile, etc.), and hardware-based protocol processors (NVIDIA K1, Teradici APEX).
I was having a conversation about this at a conference last week, and one of the attendees asked, "So you're saying that our days of making trade-offs for VDI are over?" Well, yes and no. It's not that trade-offs disappear if you go with VDI in 2013. The difference is that thanks to these two technological advancements (and Moore's Law in general), we now get to pick which trade-offs we're willing to accept.
For example, with the VDI of the 2006-2012 era, the technology limitations of the day meant that to get the benefits of VDI, we had to accept the trade-offs of shared images (which meant no user-installed apps, or only apps that were compatible with app virtualization). Or if we wanted persistent images, we had to live with only 8 IOPS per user or else be ready to spend over $1,000 per user on storage. In those days, if we wanted things like Aero Glass, 3D, or graphics-heavy apps, we had to use blade workstations instead of VMs, meaning we had to be ready to spend about $3,000 per user instead of $200.
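For readers who like to see those trade-offs as numbers, here's a quick sketch that tabulates the ballpark per-user figures cited above. The option labels and the baseline comparison are my own framing; the dollar amounts are just the rough 2006-2012-era figures from this post, not quotes for any real product.

```python
# Rough per-user cost comparison using the ballpark figures cited in the post.
# These are illustrative era-typical numbers, not vendor pricing.

options = {
    "shared non-persistent VM": 200,          # ~$200/user, shared-image trade-offs
    "persistent VM (legacy storage)": 1000,   # >$1,000/user in storage for decent IOPS
    "blade workstation (3D/graphics)": 3000,  # ~$3,000/user for Aero/3D-class graphics
}

baseline = options["shared non-persistent VM"]
for name, cost in options.items():
    # Show each option's cost relative to the cheapest (shared-image) approach
    print(f"{name}: ${cost}/user ({cost / baseline:.0f}x the shared-image baseline)")
```

The point of the comparison is the spread: going persistent or going graphics-capable in that era wasn't a feature checkbox, it was a 5x to 15x jump in per-user cost.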
In 2013, you have the choice of which trade-offs you'd like. If you want to use shared non-persistent images, you can. Or if you want persistent images, you can have those too. Each has its own advantages and disadvantages, but from a pure technical standpoint, either is possible. The same is true for graphics. If you want the cheapest, smallest servers to serve out only non-graphically-intensive apps, you can do that. And if you want to buy servers with some room for expansion and plug in NVIDIA or Teradici hardware offload cards to support more types of applications and web browsing, you can do that too.
The key is that in 2013, we finally have lots of different options for VDI. That's the big difference versus a few years ago, when VDI was always inferior to traditional desktops. In 2013, if you want to use VDI to deliver an inferior desktop, that's just another option, not a requirement.