Today's vision of a Fluid Computing future (led by IBM's Zurich Research Lab) involves applications that run on various client devices that stay synchronized with each other. Your laptop, Palm Pilot, mobile phone, and wristwatch would run some form of the same application at the same time, allowing you to access it from any one of them. There are also special considerations for when multiple devices are connected to each other at the same time. (Maybe the phone and the watch are constantly communicating via Bluetooth?) From a practical standpoint, implementing this vision should be relatively straightforward over the next few years as developers figure out how to make real .NET applications. But what about Citrix and server-based computing? How does that fit the Fluid Computing vision?
Today's Citrix MetaFrame allows users to roam from device to device while accessing the same server-based application session. While this allows a certain amount of fluidity, it has several shortcomings. The most obvious is that all the client devices must be more-or-less the same. Sure, they don't all need to run the same OS, but they do all need to have the same types of keyboards, large displays, etc. Instead, what if MetaFrame were smart enough to know the properties of the client device the user connects from? (Sort of like Citrix's now-defunct Project Vertigo.) Forget max resolution and color depth. I'm talking about whether a client device supports handwriting and ink, voice recognition, a touch screen, or whether it even has a keyboard. The server-based application could be smart enough to change its GUI based on the size and nature of the screen.
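To make the idea concrete, here is a minimal sketch of what capability-aware UI selection might look like on the server side. Everything here is hypothetical -- `ClientCapabilities`, `choose_ui_profile`, and the profile names are illustrative inventions, not any real MetaFrame or ICA API:

```python
from dataclasses import dataclass

@dataclass
class ClientCapabilities:
    """Hypothetical description of what a connecting client can do."""
    screen_width: int
    screen_height: int
    has_keyboard: bool
    has_touch_screen: bool
    supports_ink: bool
    supports_voice: bool

def choose_ui_profile(caps: ClientCapabilities) -> str:
    """Pick a GUI layout based on the client's actual capabilities,
    not just its resolution and color depth."""
    if caps.screen_width < 320:
        return "minimal"   # phone or watch: strip the UI down
    if not caps.has_keyboard and caps.supports_ink:
        return "pen"       # pen-driven device: larger targets, ink input
    return "desktop"       # full keyboard-and-mouse layout

# A PDA-class device: no keyboard, but touch and ink support.
pda = ClientCapabilities(480, 320, False, True, True, False)
print(choose_ui_profile(pda))  # → pen
```

The point is that the negotiation happens at connect time: the same server-side application session could present a pen-oriented GUI to a handheld and a full desktop GUI to a workstation, which is exactly the kind of per-device awareness MetaFrame lacks today.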
We're a long way off from seeing Fluid Computing implemented in a practical way--especially in a Citrix environment. Even so, it's interesting to think about. Now, if only I could find a way to use my cell phone to play Quake.