Our jobs as IT professionals delivering end user computing are pretty complex!
In the old days, we just had to worry about delivering the Windows OS to Windows desktops. The arrival of laptops fifteen years ago changed some of the logistics (we had to learn about new things like modems, RRAS, and VPNs), but at the end of the day our job was still about delivering Windows desktops to users.
Then VDI and desktop virtualization came along. At first, people thought they were a "game changer" that would really transform the desktop experience, but ultimately (again), our job was still just about delivering Windows desktops to users. Even though VDI introduced new complexities like remoting protocols, client device weirdness, and licensing headaches, Windows was still Windows.
As long as the Microsoft Windows desktop was the face of the IT department, we could solve all end user computing scenarios by understanding Windows applications and the registry and user profiles and GPOs and DLLs and login scripts.
This was fine from the mid-nineties until about five years ago. Then, in 2007, everything changed, for two reasons.
Reason 1. The iPhone
In January 2007, Apple announced the iPhone. Even though the first iPhone didn't support third-party apps, the iPhone (and the later devices it inspired, like iPads, Android phones, and BlackBerrys with touch screens) set the precedent for touch-based, non-mouse-driven computing. Users bought these devices by the millions and brought them into the office, forcing us to support them.
Our initial knee-jerk reaction to supporting these touch-based devices was simply to fall back on what we knew well—the Windows desktop. In what now seems naively hilarious, we actually thought that we could "solve" the tablet and smartphone problem in the same way we solved the laptop problem and the working-from-anywhere problem: we'll just deliver our familiar, safe Windows desktop:
Figure 1. Our comical attempts at solving the tablet problem
While running a Windows desktop on a touch-based device is a great party trick, it's hardly a long-term solution to users' desires to do actual work on these types of devices.
Reason 2. The Cloud
At the same time that touch-based devices were becoming popular, the concept of The Cloud as a platform for real business and powerful consumer applications was also blossoming. Technology that cost hundreds of thousands of dollars in 2007 was available on-demand via the cloud for pennies per hour by 2010. Installation and configuration time of cloud-based applications and services was measured in minutes instead of months.
For example, in the old days (like 2006), if an end user or a department wanted any kind of technology, they had to get it from IT. Even if they wanted to go "rogue" and buy their own application, they'd have to smuggle a server into the building that someone would eventually find out about. But by 2010, the availability of services like Gmail, Dropbox, Salesforce, Evernote, Join.me, Amazon Web Services, 4G networks, etc. meant that individual end users and departments could do whatever they wanted. (And not only was it difficult for IT to prevent them, but in most cases IT didn't even know it was going on!)
What do you get when you combine corporate Windows desktops, touch-based client devices, and the cloud?
If you combine the popularity of touch-based devices, the ease of access to powerful applications via the cloud, and the fact that IT's entire identity has been wrapped up in the Microsoft Windows desktop, you get the perfect storm of end user computing that we're in today in 2012.
And we can't keep doing what we've been doing the past twenty years.
If we decide to deliver traditional Windows desktops to our touch-based client devices, the user experience is horrible. The buttons are too small, clicking requires awkward pinching, zooming, panning, and scrolling, and typing obscures half of a screen that's already much smaller than what Windows desktop applications were designed for.
Users know how great native touch-based apps are (whether they're platform native for the local device or whether they're web apps tuned for touch-based inputs). So if we try to deliver a remote Windows desktop app to a user on a touch-based device, that user will use our app for about a week before crying out, "This sucks!" and going to the device's app store to find a more appropriate app. Unfortunately this is typically the point where we lose control over the user. A user who thinks VDI-based Microsoft Word sucks on an iPad will try to find iOS Word (which doesn't exist) and instead end up with something like QuickOffice. Then they'll realize that QuickOffice doesn't hook into our official VPN and SMB-based file share, but it does hook into Dropbox… And by this point, we've now lost control of that user's computing environment, and our Windows desktop-based environment that we painstakingly built with app virtualization and user workspace management and traditional Windows apps and browser shortcuts is completely useless!
The solution? MDM, MAM, BYO, new security, identity management, cloud-based app integration, service delivery management...
Of course there are ways to deal with this new reality of end-user computing. We can deploy mobile device management (MDM) and mobile application management (MAM) products to manage whatever devices the users want to use. We can implement BYO programs to enable users to work with whatever types of laptops and devices they want. We can reconfigure our firewalls and VPNs to enable a consistent user experience from anywhere. We can deploy modern file syncing products with encryption and DRM to ensure that our users have the files they need on any device, running on any platform. We can install identity management solutions that link internal user accounts to cloud- and SaaS-based applications so users only have to log in once, from anywhere.
And of course, we can use the various flavors of desktop virtualization to deliver our traditional Windows desktop applications to users in a way that's appropriate for their current device and usage scenario.
In our book The VDI Delusion, we argued that the way to be successful with desktop virtualization was not to create a desktop virtualization strategy, but instead to create a "Windows" strategy. If you figured out how to manage Windows, then it doesn't matter whether you deliver it to a desktop, a laptop, or a VDI session.
Still, we had a separation of roles between people who cared about desktop virtualization versus those who cared about the consumerization of IT. But now we realize there is no separation. The consumerization of IT affects every practitioner who's responsible for delivering Windows desktops (since those Windows desktops will continue to be an ever-smaller percentage of the overall use cases). And those who focus on the consumerization of IT need to think about desktop virtualization since so many of today's critical business apps are locked up inside Windows.
In 2012, it's not "consumerization" versus "desktop virtualization." It's not even consumerization and desktop virtualization, because the two are not mutually exclusive. Instead, they're both pieces of what we may now call "end user computing." Desktop virtualization is about delivering all types of applications, data, and working environments, and consumerization is one of the pressures that affects how we deliver that environment.