In the future, will datacenter-hosted VDI desktops be two-thirds of all use cases?

At BriForum 2010 last week, I was lucky enough to co-present a breakout session with Chetan Venkatesh called "Deconstructing Brian's Paradox: VDI is here, like it or not." As you can probably guess, I'm the "Brian" in "Brian's Paradox," so it was a really fun session to present! The idea for this session was based on the culmination of five separate articles I wrote over the past year:

Each of those articles is interesting on its own, but the common theme is my feeling that VDI (defined as datacenter-hosted desktops) is not the ultimate savior some are making it out to be; that VDI is in fact complex and expensive; and that (apart from a few niche cases) most of the world will evolve toward some kind of client-based computing model where a dynamically created virtual machine runs on a client device (via either a Type 1 or Type 2 virtualization environment).

Chetan Venkatesh does not agree with this. Specifically, where I say that 90% of the world will use client-based virtualization, Chetan believes it will be more like 20%. Chetan and I both live in the Bay Area, and we get together for dinner every few months. Earlier this year we started talking about my 90% versus his 20% client-based future, and we felt this would make a great BriForum session.

And so I present to you, Chetan's vision of why VDI is here to stay, and why future desktop models will be 65% datacenter-based.

Most of the rest of this article is based on Chetan's presentation at BriForum and the ensuing discussion between him, me, and the audience.

The setup

Chetan opens his case by saying that in today's world of 2010, there are many different desktop models: physical, physical with virtual storage, terminal server, VDI, client-based desktops on Type 2 environments, client-based desktops on Type 1 environments, etc. He then predicts that by 2015, a typical large enterprise will deliver 65% of its desktops via VDI, 5% via Terminal Server, 20% via client-based virtualization, and 10% as traditional desktops.

So how will we get from today (which is almost 100% physical) to a world where physical is only 10% and VDI is 65%? Chetan outlined three themes that will get us there:

  • Personal Computing is changing
  • Moore's Law (and its impact on the datacenter)
  • Evolving deployment models

Personal Computing is Changing

This is pretty straightforward. Chetan explained that the notion of the personal computer is changing (and in fact the notion of personalization is changing). Today's applications like Facebook, LinkedIn, Twitter, Wave, etc. all make the desktop less important. To the user it becomes a "rich profile & content of what I like and what I trust" instead of the corporate desktop which is a "rigid set of policies of what I can and cannot do."

By 2015, the PC won't be a primary device, replaced instead by consumption-oriented devices (which combined will be the new "personal computer"). Windows will become middleware—just another place to run apps that's nothing more than a connection between users and the enterprise apps. Users of 2015 won't care about app installation and management, and they'll force corporations to accept their new "personalities."

So if that's our layout... how are we trying to solve this today?

[Image: a PC depicted as a typewriter]

Yikes! Chetan claims that today's approaches to desktop virtualization are really not game-changing at all. If a PC is a typewriter, then running a Windows instance in a client-based VM is just an electronic typewriter. Sure, there are some more electronics and neat features, but it's still a typewriter!

Moore's Law & the Datacenter

As an intro into the Moore's Law conversation, Chetan talked about dematerialization & liquidity. "Dematerialization" is the concept of transforming a physical object into an abstract concept. (Money used to be paper and coins; now it's just numbers in a computer. Mortgages used to be loans from a single bank; now they're sliced up and bought and sold online.) Dematerialization of the desktop provides the liquidity where the desktop doesn't just run within the boundaries of a single box. This is bigger than just flowing the entire monolithic desktop VM from one host to another—that's nothing more than the electric typewriter. Dematerialization means breaking up the memory and disk and data and CPU and personalization so that each can run in the most performant and appropriate way. That provides the liquidity for each desktop element to continually flow to wherever the best place for it is.

So what the heck does this mean? Consider the architecture of the desktop in 2015:

  • The rack is the new computer
  • 10G Ethernet is the new bus
  • The hypervisor is the new kernel
  • The software mainframe is the new OS

The takeaway from this is that to get the compute liquidity, the desktop can't run as a VM on a client—it's got to run in a datacenter. The datacenter has the shared resources that lead to better flexibility. The datacenter will allow each desktop to dial up or dial down resources. The datacenter will let us live migrate VMs, users, and capacity.

But in today's world, people (like me) are afraid of the datacenter. It's expensive and complex. Chetan points out that Moore's Law means the datacenter becomes more attractive each year, while it's virtually meaningless for desktop hardware. Consider how Moore's law applies to datacenter desktops:

Year    VMs/server    VMs/rack    Cost/user
2010        70          1,120        $400
2012       150          2,400        $330
2014       300          4,800        $260
2016       600          9,600        $150
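Chetan's projection can be sanity-checked with a quick script. The 16-servers-per-rack figure is my own inference from the 2010 row (1,120 / 70), and the dollar figures are his estimates, not something derivable from the VM counts:

```python
# Chetan's projected VDI density figures. The rack size (16 servers)
# is inferred from the 2010 row; cost-per-user is his estimate.

table = [  # (year, VMs per server, VMs per rack, cost per user in $)
    (2010,  70, 1120, 400),
    (2012, 150, 2400, 330),
    (2014, 300, 4800, 260),
    (2016, 600, 9600, 150),
]

for year, per_server, per_rack, cost in table:
    assert per_rack == per_server * 16   # implied rack size stays constant
    print(f"{year}: {per_server:>3} VMs/server -> {per_rack:,} VMs/rack at ${cost}/user")

# Density roughly doubles every two years (Moore's-law-style), while
# cost per user falls far more slowly -- the economics come from packing
# more desktops per rack, not from cheaper individual desktops.
for (y0, s0, *_), (y1, s1, *_) in zip(table, table[1:]):
    print(f"{y0} -> {y1}: density x{s1 / s0:.1f}")
```

Note that the per-user cost drops by less than 3x while density grows by more than 8x, which is exactly Chetan's point: the datacenter side of the equation is where Moore's Law pays off.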


When it comes to desktops, who cares about Moore's Law? Sure it means that we can get more processing for our money, but desktop computers are more-or-less stuck at the same price points they've been at for the past decade. And doubling the processing of a desktop doesn't change the computing model at all. (Again it's just like a faster electric typewriter.)

New Deployment Models

The final theme Chetan outlined was about the evolving deployment models for desktops. VDI is perfect for "at scale" deployments. VDI is perfect for the "containerization" of IT (vBlock, factory-built VDI pods, etc.). All of this will enable us to install thousands of desktops in only dozens of hours.

All of this leads to the datacenter

So the desktop is becoming less about the personal computer. A lot of applications that users care about will be procured outside of traditional IT channels. But IT isn't going away, and for the desktops and apps that IT can provide, Chetan feels they can best be delivered from the datacenter.

He believes that all of this will combine to allow VDI to deliver a better experience than what's possible from a client. "Imagine that everything is instant. Apps open instantly. Docs open instantly. Everything is so snappy and perfect. That's an experience that a dematerialized desktop can deliver." At that point the users can vote with their feet, so to speak. Combine that with the security, cost, reliability, etc., and he believes VDI is a no-brainer for the majority of use cases.

Chetan's closing thoughts: VDI is not just the sum composite of knee-jerk reactions to PC management, but rather it's a long-term transformational vector—the natural evolution of computing, and something that can't be ignored.

What do you think? It's pretty much the exact opposite of what I think, but he makes some great points.

Join the conversation



I do think Chetan makes some great points, but as I discussed with him after the session, I think he doesn't go far enough in recognising the consequences of his thinking.

To get the advantages of Moore's law and Chetan's 9,600 VMs per rack, you need at least 9,600 users per organisation.

If you are going to reach 65% of the market, you need to address more than the large enterprise; you need the SME market too.

So to reach Chetan's prediction of 65% VDI, what we effectively need is a way to take the advantages of large-scale desktop deployments and apply them to the SME. We already have a name for this: 'Desktop as a Service'.

This now gives us a problem with reaching the goal by 2016, as it means not only a technical change in large enterprises but also a psychological change in the minds of millions of SMEs around the globe. There is too much mental inertia in the industry for this to happen.

So although I am convinced (and have been for a while) by the advantages of dematerialization and liquidity, I think the timescale is a little ambitious :).


It was an interesting analysis and session at BriForum, but I'm not convinced that the VDI model will be used for two-thirds of desktops. Right now, what percentage of users require offline mobility? The VDI model does not support that use case (other virtual desktop models do). What other use cases will the VDI model not support? There will be more, which is why you need options.

Also, Moore's law is a great estimating tool assuming all other factors remain constant, but they don't. What will the next operating systems and applications require from a processor/memory perspective? As Moore's law keeps increasing the speed and power of our systems, applications and operating systems keep consuming the resources almost like a smorgasbord. Don't believe me? Look at all the published scalability reports. VMware, Microsoft and Citrix are touting 100+ virtual desktops per server running Windows XP but only 60 when running Windows 7. We lost 40% of our capacity with just the OS, and we haven't even upgraded applications yet!

If you look at the numbers Chetan posts, it looks like he is seeing 70 desktops per server now, which leads me to believe he is looking at a Windows XP platform with real apps. Will people still use Windows XP in 2016? Yes, but not as many as now.


Daniel hit the nail on the head.

With much faster CPUs and with architectures that can handle way more memory and faster buses, come resource hungry OSs and Frameworks.

I do not want to sound like a broken record, but I posted about this many months ago and coined the term "Claudio's Law," which says: "The time to boot the latest and greatest Windows OS with the latest and greatest Office suite on the latest common desktop hardware is constant." Run tests today on a 10-year-old laptop running Windows 2000 or 98 with Office 97, and you will notice it takes the exact same amount of time as loading Windows 7 and Office 2010 on today's desktops or laptops.

Again, why are we seeing 200-300MB+ frameworks like .NET? Things will only get worse down the road: bigger, fatter, slower frameworks that will not look that slow thanks to much faster hardware.

So Chetan's argument would be valid if in 2015 people were still hosting Windows XP and Windows 7 for their clients, which I think will not be the case.

Windows 9 with Office 2014 will look as slow/fast as Windows 98 and Office 97 looked back in early 2000.

As I say, even today can we get more than 70 VMs per server? Sure. I was able to run 200 on a server with 8GB. But I was running DOS 5.

In 2015, I do agree hardware will be better and faster, but as Daniel pointed out, the 2015 OS and productivity tools will use way more resources, reducing this 'scalability gain'.

And one more time, as I wrote in the past, unless some major breakthrough not only in HW but in the way OSs are designed comes to the picture, Claudio's Law will always be there.
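The counterargument in these comments can be sketched numerically. All the factors below are illustrative assumptions: the 2010 baseline of 70 VMs/server and the ~8x hardware gain come from Chetan's table, and the footprint growth is the XP-to-Windows-7 drop (roughly 100 down to 60 desktops per server) cited above:

```python
# A sketch of the commenters' counterargument: hardware density gains
# get partially eaten by per-VM resource growth. All growth factors
# here are illustrative assumptions, not measurements.

def net_density(base_vms, hw_factor, footprint_factor):
    """VMs per server after hardware improves by hw_factor while each
    VM's OS/app footprint grows by footprint_factor."""
    return base_vms * hw_factor / footprint_factor

# 70 VMs/server in 2010, hardware ~8x better by 2016, but the 2016
# OS/apps needing ~1.7x the resources (the XP -> Windows 7 drop from
# ~100 to ~60 desktops per server):
print(round(net_density(70, 8, 100 / 60)))   # ~336, well short of 600
```

If the footprint keeps pace like this, the 2016 row of Chetan's table overshoots by nearly 2x, which is the crux of the "Claudio's Law" objection.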


This is all fine and dandy, but it still doesn't resolve the need to make the data portable. Distributing the OS is one thing; providing access to corporate data is an entirely different challenge.

In order for VDI to be the answer all applications are going to have to be SaaS... at which point the OS is irrelevant anyway, right?

Until then, TS/Citrix is the only solution that addresses both application and data management!


Why don't we just put a little data center in everyone's laptop and be done already! :)



Why can't we keep the data internal to the data center? In certain geographies (like North America) we have high bandwidth and low latency. We always say you need to keep the data close to the application front-end, but does location really matter? If the speed is high enough and the latency low enough, can't we keep the data in the data center while our desktops live elsewhere?  


There are some big gaping holes in that argument.  Yes, everything will become more interconnected and more and more things will move to the cloud.  But that doesn't mean server-hosted VDI will become the predominant model for desktop computing - in fact I think computing will become *more* decentralized as things become more interconnected.  We will have local computation everywhere (your mobile phone, your TV/appliances, your car...) because it is cheaper, faster, and more reliable.

1. That which can be distributed should be distributed - for reliability, resilience, and performance reasons.  There are systems where you have no choice but to centralize because there are too many interdependencies.  However desktops are exactly the opposite!  The desktop workload is perfectly parallelizable and distributable.  There is no good reason my desktop needs to run on the same physical hardware as someone else's desktop, and in fact you lose a lot (interactive performance, cost-effectiveness, ability to work disconnected, simplicity) by doing so.  More on the perils of unnecessary centralization:

2. Centralized management does not imply centralized execution.  You can get all the benefit of centralizing the management of desktops without moving the execution into the data center.

3. As everyone else has pointed out, desktop workloads will bloat over time as new features are added (aka "Claudio's Law"), so the VDI scaling arguments are bogus unless you expect most users to be still using Windows XP with the same apps in 2016.

I'm with Brian here in that I think VDI will get bypassed for SaaS/cloud apps, which actually make sense. MokaFive VP of Products Purnima Padmanabhan recently wrote a blog entry on this very topic (VDI vs cloud):


The argument makes perfect sense to me. But I will side with the rest of you and say I want offline VDI as well. In my company we are going to keep laptops, PDAs, and other devices. The point that seems to be missing here is operational efficiency. What I care about most is the ability to manage VDI (currently deployed to 10% of my company's workforce) with the same tools we use for our distributed desktops: AV, patching, push application deployment, etc. Once we can do this, it won't matter where a user computes; if it makes sense to run in VDI, great; if not, they get a laptop. The help desk and desktop management teams won't know the difference.


I find this topic very interesting.

IF SBC is the major percentile of the desktop execution environment THEN thin clients will become more than niche OR an overabundance of under-utilized end user devices will be sprawled throughout the enterprise.

For an enterprise to choose thin clients vs. fat clients, there must be a MAJOR price difference; otherwise I would prefer to purchase a more flexible machine.

With VDI becoming SBC, it will be a MAJOR shift in how manufacturers such as Dell, HP, and IBM sell their devices to customers. Why would we buy fat clients if they aren't utilized?

If I were Chetan and was so sure about VDI being more SBC than CBC I would buy some stock in thin client vendors.

Server Virtualization is used to tackle a variety of concerns, one of which is to utilize server hardware which is under-utilized.

Shouldn't Desktop Virtualization be used to do the same?

It is all agreed that management should be centralized, but why all computing?

IMO, end user devices (CBC) will be the major desktop execution environment. But I do agree the execution environment should be determined by wherever it's most optimal. There are just two sides to the story.


@Icelus - I think you have some facts incorrect. IBM does not sell PCs; they gave that up when they sold the business to Lenovo. Guess they saw the writing on the wall. IBM has been working on delivering desktops from the cloud for some time now. In my opinion they are looking to deliver a secure shared cloud which will cost less than buying a PC. They may or may not be there right now, but they are way further along than Dell and HP.

Investing in thin clients? Why invest in a thin client company? Once Dell starts making them or buys Wyse for their zero-touch IP, they will drive down the price and the smaller players will go out of business. Dell's model is to sell for 2% above cost, so I don't see how that's a good investment.


@Watson - oh whoops, I did say IBM. my mistake.

I am unaware of Dell's model and I doubt it's that poorly designed, but I get your point.

Still, you'd be hard pressed to argue with the price comparison of a fat PC vs. a thin client. Even in 5 years it will be a tough sell.

Unless there is a true networked KVM that is nice and cheap and has no brains, I see the majority of execution on the client.

But regardless, what's the point of purposely dumbing down your end user devices just to justify SBC execution?

Computing is going to evolve in both the datacenter and the client, as it always has. The most robust solution will address both use cases working in harmony.


@Daniel - That's a great idea, except have you ever actually tried this with a client/server type application? The bulk of our clients are CPA firms that use many client/server apps for audits, tax returns, etc. The amount of traffic that moves between the client and the server does not work across a broadband internet connection at anywhere near acceptable performance. In addition, even in the DC/Baltimore area, where we have some of the best broadband access in the country, there are still locations where 3G is barely fast enough to support ICA.

I like the dream... but we aren't there yet. And in my personal opinion, I don't know that we would ever care to be. Keep the data where it is safe and secure and running at LAN speed.