Utility Computing World 2004 takes place this week in New York City. I’ve been thinking quite a bit about how today’s Windows server-based computing (Citrix, Terminal Server, etc.) will look in the future, and how it will fit into the “utility computing” vision so many companies are talking about today.
Before looking too far ahead, I think it makes sense to answer a few questions. What is utility computing? How does that relate to today’s server-based computing?
A Two-Minute Overview of Utility Computing
Utility Computing. Grid Computing. Distributed Computing. On Demand Computing. Ubiquitous Computing. These terms are often used interchangeably, although each describes a different aspect of the same computing vision.
The definitions vary depending on who you talk to, but the basic idea is that multiple systems can be combined to produce a large “virtual” computer that is more powerful and more redundant than any one physical system. Then, this huge virtual computer can provide applications for users. Since it’s really a system of many smaller computers, it would be simple to add capacity by plugging in a few more little computers as needed.
According to Utility Computing (an independent media company), “Utility Computing is a concept that has both a long term and an immediate definition:”
In the long term, it refers to the fact that a ubiquitous IT infrastructure will deliver all our computing needs—be they for business or entertainment. We would own far less computing assets than we do now but would instead pay for access to services delivered by "utility computing." People and companies will pay for what they use and no more. Just like electricity, computing will have become a Utility.
The ramifications for business are enormous. Legacy technology issues will be a thing of the past and the ability to swiftly up or downscale to meet demand will have a revolutionary effect on companies and the way in which they formulate strategy.
The concept will also be applied to individual users of computing, where they no longer need to buy their own computers and do regular upgrades, but instead are offered packages like they choose their satellite television services today.
At present, the infrastructure required to deliver that reality is beginning to be put into place. IBM has been working on the idea for some time, but the major technology vendors are now all jostling for position. At this early stage, their offerings may be seen as IT outsourcing, where large corporations allow dedicated service providers to take care of all their IT needs.
So really, buzzwords like “grid” computing are just underlying technologies that will enable true utility computing. Other enabling technologies might be things like Sun’s JINI technology, Think Dynamics (recently bought by IBM and incorporated into Tivoli), Softricity’s SoftGrid, and even Citrix’s MetaFrame technologies.
How does this relate to Citrix and Terminal Server?
The utility computing vision is the direction we’ve ultimately been moving toward with today’s Citrix, Terminal Server, and server-based computing environments. For example, we use Citrix to move all application execution onto the servers. We don’t do that for the sake of the servers themselves; rather, we do it so we can use any application from any device over any connection (so that we can access applications “on demand,” in Citrix’s words).
With utility computing, we still want that “any” access; the difference is that the technology allows the centralized backend application execution to be distributed among many different physical systems.
Let’s look at a concrete real-world example of the kind of distributed computing technology that could enable utility computing. Let’s say a company has three servers—an email server, a web server, and a database server. In today’s world, if the web server suddenly gets hammered with requests, it can get bogged down, even though the email and database servers are not busy. In a distributed computing model, those other servers could dynamically allocate resources to help the web server. At first this may take the form of a product you run on a server that could “donate” extra capacity to other servers that need it. Eventually, however, all of your servers could be generic “modules” you plug into a massive framework, and you would create virtual servers or applications that could run anywhere.
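The first stage of that idea—idle servers “donating” spare capacity to an overloaded one—can be sketched in a few lines of Python. This is purely illustrative: the server names, the abstract capacity “units,” and the greedy donation rule are all my own assumptions, not any vendor’s actual algorithm.

```python
# Hypothetical sketch: idle servers donate spare capacity to an overloaded one.
# Capacity and load are measured in made-up abstract "units".

servers = {
    "email": {"capacity": 100, "load": 20},
    "web":   {"capacity": 100, "load": 150},  # hammered with requests
    "db":    {"capacity": 100, "load": 30},
}

def rebalance(servers):
    """Greedily move excess load from overloaded servers to servers with headroom."""
    for name, s in servers.items():
        excess = s["load"] - s["capacity"]
        if excess <= 0:
            continue  # this server is within its own capacity
        for other_name, other in servers.items():
            if other_name == name:
                continue
            headroom = other["capacity"] - other["load"]
            if headroom <= 0:
                continue  # no spare capacity to donate
            moved = min(excess, headroom)
            s["load"] -= moved
            other["load"] += moved
            excess -= moved
            if excess == 0:
                break

rebalance(servers)
```

After rebalancing, the web server’s overflow has been absorbed by the email server’s spare capacity, and no server runs above its own limit—the same effect the “donation” products aim for, just without the hard part of actually migrating the work.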
Think of this like a single copy of VMware that could be installed onto multiple physical servers at the same time. If you wanted to add more virtual servers, or if you wanted to increase capacity, you could snap in a few more servers. Blades, anyone?
In some ways, we in the Citrix and Terminal Server world are emulating this today. Some people are building their MetaFrame farms out of blades, and moving servers in and out of different silos / load-balancing groups as needed.
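That manual shuffling of blades between silos can be sketched the same way. Again, everything here is a made-up illustration—the silo names, the sessions-per-server threshold, and the reassignment rule are assumptions, not how any MetaFrame load-balancing product actually works.

```python
# Hypothetical sketch: shift spare blades between load-balancing silos
# as session demand moves. Names and numbers are made up.

MAX_SESSIONS_PER_SERVER = 40

silos = {
    "office-apps": {"servers": 4, "sessions": 60},
    "erp":         {"servers": 4, "sessions": 190},  # over capacity
}

def servers_needed(sessions):
    # Ceiling division: how many servers keep each one under the threshold.
    return -(-sessions // MAX_SESSIONS_PER_SERVER)

def reassign_blades(silos):
    """Move idle blades from underused silos into overloaded ones."""
    for name, silo in silos.items():
        need = servers_needed(silo["sessions"]) - silo["servers"]
        if need <= 0:
            continue  # this silo already has enough servers
        for other_name, other in silos.items():
            if other_name == name:
                continue
            spare = other["servers"] - servers_needed(other["sessions"])
            moved = min(need, spare)
            if moved > 0:
                other["servers"] -= moved
                silo["servers"] += moved
                need -= moved
            if need == 0:
                break

reassign_blades(silos)
```

In this sketch one blade moves from the lightly loaded office-apps silo into the overloaded ERP silo. The utility computing promise is essentially this loop running automatically, across the whole datacenter, instead of an administrator re-imaging blades by hand.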
We already have this today with Storage Area Networks. Now that blades can boot off of drives in SAN arrays, people can build huge SANs and use them as one gigantic super-redundant drive array for an entire datacenter. While it might not be that cost-effective today, it’s certainly a cool concept.
This is why EMC’s purchase of VMware is so intriguing. Even though most analysts are questioning whether EMC can pull the two very different product families together, it’s easy to see EMC’s thinking as to why they wanted to buy VMware. (Or, as I heard it, why VMware shopped themselves to EMC.)
I think it’s safe to assume that this kind of virtual computer, with distributed Windows execution, is several years away. Then again, a lot of companies are doing this today in the UNIX and mainframe space, so maybe some of them will be able to migrate down into the Windows world?
I’m attending Utility Computing World 2004, and I’ll be keeping my eye out for technologies, products, and ideas that could be directly applicable to today’s Windows server-based computing world.