At BriForum 2010 last week, I was lucky enough to co-present a breakout session with Chetan Venkatesh called "Deconstructing Brian's Paradox: VDI is here, like it or not." As you can probably guess, I'm the "Brian" in "Brian's Paradox," so it was a really fun session to present! The idea for this session grew out of five separate articles I wrote over the past year:
- Introducing “Madden’s Paradox”: the gotcha of the VDI versus TS debate, August 2009
- Everyone who needs VDI already has it, Jan 2010
- Prediction: 90% of the future "VDI" will be client-based, Feb 2010
- What the Windows desktop will look like in 2015: Brian's vision of the future, April 2010
- If you can take a VDI instance "offline," then why don't you just always run it offline? ("Madden's Offline Paradox?"), May 2010
Each of those articles is interesting on its own, but the common theme is my feeling that VDI (defined as datacenter-hosted desktops) is not the ultimate savior that some are making it out to be. In fact, VDI is complex and expensive, and (apart from a few niche cases) most of the world will evolve toward some kind of client-based computing model where a dynamically created virtual machine runs on a client device (via either a Type 1 or Type 2 virtualization environment).
Chetan Venkatesh does not agree with this. Specifically, where I say that 90% of the world will use client-based virtualization, Chetan believes the number will be more like 20%. Chetan and I both live in the Bay Area, and we get together for dinner every few months. Earlier this year we started talking about my 90% versus his 20% client-based future, and we felt this would make a great BriForum session.
And so I present to you, Chetan's vision of why VDI is here to stay, and why future desktop models will be 65% datacenter-based.
Most of the rest of this article is based on Chetan's presentation at BriForum and the ensuing discussion between him, me, and the audience.
Chetan opens his case by saying that in today's world of 2010, there are many different desktop models: physical, physical with virtual storage, terminal server, VDI, client-based desktops on Type 2 environments, client-based desktops on Type 1 environments, etc. He then goes on to predict that by 2015, a typical large enterprise will deliver 65% of its desktops via VDI, 5% via Terminal Server, 20% via client-based virtualization, and 10% as traditional desktops.
So how will we get from today (which is almost 100% physical) to a world where physical is only 10% and VDI is 65%? Chetan outlined three themes that will get us there:
- Personal Computing is changing
- Moore's Law (and its impact on the datacenter)
- Evolving deployment models
Personal Computing is Changing
This is pretty straightforward. Chetan explained that the notion of the personal computer is changing (and in fact the notion of personalization is changing). Today's applications like Facebook, LinkedIn, Twitter, Wave, etc. all make the desktop less important. To the user it becomes a "rich profile & content of what I like and what I trust" instead of the corporate desktop which is a "rigid set of policies of what I can and cannot do."
By 2015, the PC won't be a primary device, replaced instead by consumption-oriented devices (which combined will be the new "personal computer"). Windows will become middleware—just another place to run apps that's nothing more than a connection between users and the enterprise apps. Users of 2015 won't care about app installation and management, and they'll force corporations to accept their new "personalities."
So if that's our layout... how are we trying to solve this today?
Yikes! Chetan claims that today's approaches to desktop virtualization are really not game-changing at all. If a PC is a typewriter, then running a Windows instance in a client-based VM is just an electric typewriter. Sure, there are some more electronics and neat features, but it's still a typewriter!
Moore's Law & the Datacenter
As an intro into the Moore's Law conversation, Chetan talked about dematerialization & liquidity. "Dematerialization" is the concept of transforming a physical object into an abstract concept. (Money used to be paper and coins; now it's just numbers in a computer. Mortgages used to be loans from a single bank; now they're sliced and bought and sold online.) Dematerialization of the desktop provides the liquidity where the desktop doesn't just run within the boundaries of a single box. This is bigger than just flowing an entire monolithic desktop VM from one host to another—that's nothing more than the electric typewriter. Dematerialization means breaking up the memory and disk and data and CPU and personalization so that each can run in the most performant and appropriate way. That provides the liquidity for each desktop element to continually flow to wherever the best place for it is.
So what the heck does this mean? Consider the architecture of the desktop in 2015:
- The rack is the new computer
- 10G Ethernet is the new bus
- The hypervisor is the new kernel
- The software mainframe is the new OS
The takeaway from this is that to get this compute liquidity, the desktop can't run as a VM on a client—it's got to run in a datacenter. The datacenter has the shared resources that lead to better flexibility. The datacenter will allow each desktop to dial up and dial down resources. The datacenter will let us live migrate VMs, users, and capacity.
But in today's world, people (like me) are afraid of the datacenter. It's expensive and complex. Chetan points out that Moore's Law means the datacenter becomes more attractive each year, while it's virtually meaningless for desktop hardware. Consider how Moore's law applies to datacenter desktops:
(Chetan's slide here showed a table with columns for year, VMs per server, VMs per rack, and cost per user.)
When it comes to desktops, who cares about Moore's Law? Sure it means that we can get more processing for our money, but desktop computers are more-or-less stuck at the same price points they've been at for the past decade. And doubling the processing of a desktop doesn't change the computing model at all. (Again it's just like a faster electric typewriter.)
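Chetan's datacenter economics argument can be sketched numerically. The figures below are my own illustrative assumptions (rack cost, servers per rack, starting VM density), not numbers from his slide; the point is just the shape of the curve: if Moore's Law doubles VM density every couple of years while the cost of a loaded rack stays roughly flat, the cost per VDI user halves on the same schedule.

```python
# Illustrative sketch of the Moore's Law / datacenter argument.
# All constants are assumptions for demonstration, not real pricing.

RACK_COST = 200_000      # assumed fully loaded cost of one rack (USD)
SERVERS_PER_RACK = 20    # assumed number of servers per rack

def vdi_economics(base_year, base_vms_per_server, years):
    """Project VMs/server, VMs/rack, and cost/user, assuming VM
    density doubles every two years while rack cost stays flat."""
    rows = []
    for i, year in enumerate(range(base_year, base_year + years, 2)):
        vms_per_server = base_vms_per_server * (2 ** i)
        vms_per_rack = vms_per_server * SERVERS_PER_RACK
        cost_per_user = RACK_COST / vms_per_rack
        rows.append((year, vms_per_server, vms_per_rack, round(cost_per_user, 2)))
    return rows

for row in vdi_economics(2010, 8, 6):
    print(row)
```

With these assumed starting points, cost per user drops from $1,250 in 2010 to roughly $312 by 2014—while, as the next paragraph notes, a desktop PC bought in either year costs about the same.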
New Deployment Models
The final theme Chetan outlined was about the evolving deployment models for desktops. VDI is perfect for "at scale" deployments. VDI is perfect for the "containerization" of IT (vBlock, factory-built VDI pods, etc.). All of this will enable us to install thousands of desktops in only dozens of hours.
All of this leads to the datacenter
So the desktop is becoming less about the personal computer. A lot of applications that users care about will be procured outside of traditional IT channels. But IT isn't going away, and for the desktops and apps that IT can provide, Chetan feels they can best be delivered from the datacenter.
He believes that all of this will combine to allow VDI to deliver a better experience than what's possible from a client. "Imagine that everything is instant. Apps open instantly. Docs open instantly. Everything is so snappy and perfect. That's an experience that a dematerialized desktop can deliver." At that point the users can vote with their feet, so to speak. Combine that with the security, cost, reliability, etc., and he believes VDI is a no-brainer for the majority of use cases.
Chetan's closing thoughts: VDI is not just the sum composite of knee-jerk reactions to PC management, but rather it's a long-term transformational vector—the natural evolution of computing, and something that can't be ignored.
What do you think? It's pretty much the exact opposite of my own view, but he makes some great points.