How the Utility Computing Vision will Shape Citrix and Terminal Server

Utility Computing World 2004 takes place this week in New York City. I’ve been thinking quite a bit about how today’s Windows-based server-based computing (Citrix, Terminal Server, etc.) will look in the future, and how this will fit into the “utility computing” vision so many companies are talking about today.

Before looking too far ahead, I think it makes sense to answer a few questions. What is utility computing? How does that relate to today’s server-based computing?

A Two-Minute Overview of Utility Computing

Utility Computing. Grid Computing. Distributed Computing. On Demand Computing. Ubiquitous Computing. These terms are often used interchangeably, although each actually describes a different aspect of the same computing vision.

The definitions vary depending on who you talk to, but the basic idea is that multiple systems can be united together to produce a large “virtual” computer that is more powerful and more redundant than any one physical system. Then, this huge virtual computer can provide applications for users. Since it’s really a system of many smaller computers, it would be simple to add capacity by plugging in a few more little computers as needed.
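
To make the idea concrete, here’s a minimal, purely illustrative Python sketch. The `Node` and `VirtualComputer` names are my own invention, not any vendor’s API; the point is just that the big “virtual” computer is a pool of small ones, and adding capacity means plugging in another node:

```python
# Illustrative only: a toy model of a "virtual computer" built from
# many small machines. Node and VirtualComputer are hypothetical names,
# not a real product's API.

class Node:
    def __init__(self, name, cpu_ghz):
        self.name = name
        self.cpu_ghz = cpu_ghz  # crude stand-in for this node's capacity

class VirtualComputer:
    def __init__(self):
        self.nodes = []

    def plug_in(self, node):
        """Adding capacity is as simple as adding another small computer."""
        self.nodes.append(node)

    @property
    def total_capacity(self):
        return sum(n.cpu_ghz for n in self.nodes)

vc = VirtualComputer()
vc.plug_in(Node("blade-01", 2.4))
vc.plug_in(Node("blade-02", 2.4))
print(vc.total_capacity)  # 4.8 -- need more? plug in blade-03
```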

According to Utility Computing (an independent media company), “Utility Computing is a concept that has both a long term and an immediate definition:”

In the long term, it refers to the fact that a ubiquitous IT infrastructure will deliver all our computing needs—be they for business or entertainment. We would own far fewer computing assets than we do now, but would instead pay for access to services delivered by “utility computing.” People and companies will pay for what they use and no more. Just like electricity, computing will have become a utility.

The ramifications for business are enormous. Legacy technology issues will be a thing of the past, and the ability to swiftly scale up or down to meet demand will have a revolutionary effect on companies and the way in which they formulate strategy.

The concept will also apply to individual users of computing, who will no longer need to buy their own computers and perform regular upgrades, but will instead choose packages much as they choose their satellite television services today.

At present, the infrastructure required to deliver that reality is beginning to be put into place. IBM has been working on the idea for some time, but the major technology vendors are now all jostling for position. At this early stage, their offerings may be seen as IT outsourcing, where large corporations allow dedicated service providers to take care of all their IT needs.

So really, buzzwords like “grid” computing are just underlying technologies that will enable true utility computing. Other enabling technologies might be things like Sun’s JINI technology, Think Dynamics (recently bought by IBM and incorporated into Tivoli), Softricity’s SoftGrid, and even Citrix’s MetaFrame technologies.

How does this relate to Citrix and Terminal Server?

The utility computing vision is the direction that today’s Citrix, Terminal Server, and server-based computing environments are ultimately moving toward. For example, we use Citrix to move all application execution onto the servers. We don’t do that for the sake of the servers themselves; rather, we do it so that we can use any application from any device over any connection (so that we can access applications “on demand,” in Citrix’s words).

With utility computing, we still want that “any” access; the difference is that the technology allows the centralized backend application execution to be distributed among many different physical systems.

Let’s look at a concrete, real-world example of the kind of distributed computing technology that could enable utility computing. Let’s say a company has three servers—an email server, a web server, and a database server. In today’s world, if the web server suddenly gets hammered with requests, it could get bogged down even though the email and database servers are not busy. In a distributed computing model, those other servers could dynamically allocate resources to help the web server. At first this may take the form of a product you run on a server that could “donate” extra capacity to other servers that need it. Eventually, however, all of your servers could be generic “modules” you plug into a massive framework, and you would create virtual servers or applications that could run anywhere.
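
Here’s a rough sketch of what that “capacity donation” idea could look like. All the names and numbers below are hypothetical, and no shipping product I know of works exactly this way:

```python
# Toy "capacity donation" broker -- illustrative only, not a real product.
# Each server reports how much of its capacity it is actually using;
# idle servers lend spare headroom to whichever server is overloaded.

servers = {
    "email": {"capacity": 100, "load": 20},
    "web":   {"capacity": 100, "load": 160},  # suddenly hammered with requests
    "db":    {"capacity": 100, "load": 30},
}

def rebalance(servers):
    for name, s in servers.items():
        overload = s["load"] - s["capacity"]
        if overload <= 0:
            continue  # this server can cope on its own
        for donor_name, donor in servers.items():
            spare = donor["capacity"] - donor["load"]
            if donor_name == name or spare <= 0:
                continue
            donated = min(spare, overload)
            donor["capacity"] -= donated   # the idle server gives up headroom...
            s["capacity"] += donated       # ...and the busy server gains it
            overload -= donated
            print(f"{donor_name} donates {donated} units to {name}")
            if overload <= 0:
                break

rebalance(servers)
# The web server ends up with enough donated capacity to absorb the
# spike, without anyone buying a fourth physical server.
```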

Think of this like a single copy of VMware that could be installed onto multiple physical servers at the same time. If you wanted to add more virtual servers, or if you wanted to increase capacity, you could snap in a few more servers. Blades, anyone?

In some ways, we in the Citrix and Terminal Server world are emulating this today. Some people are building their MetaFrame farms out of blades, and moving servers in and out of different silos / load-balancing groups as needed.
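
As a toy illustration of that silo rebalancing (the server and silo names are invented for the example), the operation amounts to little more than reassigning a blade from a quiet load-balancing group to a busy one:

```python
# Illustrative sketch of moving MetaFrame servers between silos
# (load-balancing groups) as demand shifts. All names are hypothetical.

silos = {
    "office-apps": ["mf-01", "mf-02", "mf-03"],
    "erp":         ["mf-04"],
}

def move_server(server, src, dst, silos):
    """Reassign a blade from a quiet silo to a busy one."""
    silos[src].remove(server)
    silos[dst].append(server)

# Month-end close: the ERP silo is swamped while office apps are quiet.
move_server("mf-03", "office-apps", "erp", silos)
print(silos)  # {'office-apps': ['mf-01', 'mf-02'], 'erp': ['mf-04', 'mf-03']}
```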

We already have the storage version of this today with Storage Area Networks (SANs). Now that blades can boot off of drives in SAN arrays, people can build huge SANs and use them as one gigantic, super-redundant drive array for an entire datacenter. While it might not be that cost-effective today, it’s certainly a cool concept.

This is why EMC’s purchase of VMware is so intriguing. Even though most analysts are questioning whether EMC can pull the two very different product families together, it’s easy to see EMC’s thinking as to why they wanted to buy VMware. (Or, as I heard it, why VMware shopped themselves to EMC.)

What’s next?

I think it’s safe to assume that this virtual computer, or distributed Windows execution in general, is several years away. Then again, a lot of companies are doing this today in the UNIX and mainframe space, so maybe some of them will be able to migrate down into the Windows world?

I’m attending Utility Computing World 2004, and I’ll be keeping my eye out for technologies, products, and ideas that could be directly applicable to today’s Windows server-based computing world.

Join the conversation


This message was originally posted by an anonymous visitor on September 7, 2004
The biggest problem that I see with this vision is the lack of parallelism in most programs. Many programs people use have trouble utilizing the hyper-threading that Pentium 4 processors offer, much less dual processors or more. This vision is a good thing for non-demanding tasks like word processing, but less good for performance apps like games. Until more power-hungry programs are re-coded to handle multi-processing environments, any program that is speed-limited by a single processor will not be a good candidate for this type of computing.
This message was originally posted by Ives Stoddard on November 2, 2004
In addition to the previous response, multi-core processors are only going to make this more difficult for software developers. Not only does HT simulate an additional processor, but AMD, IBM, and Intel are all going to shift to multi-core processors to get around the current cap on processor speed increases. Scientific American (November ’04) has two interesting articles about this.
This message was originally posted by Treb Ryan on November 24, 2004
The problem with most “on-demand” proponents is that they are not taking the proposition far enough. While there is no question that a new generation of CIOs is used to “paying as you go” and a new generation of CFOs is looking for shared risk with their providers, the real transformation in business isn’t going to happen by sharing the risk of things like spare processing power or excess disk capacity. Only by offering software and complete solutions on demand will companies be able to compete in the market. In five years’ time the question will not be whether to buy dedicated hardware or to lease utility hardware grids; it will be whether to rent software in a “Software as a Service” (SaaS) model or to outsource the entire function to a BPO firm.
