Providing Windows Applications to Users: Nine Different Theories and Architectures

Written on Mar 13 2006


by Brian Madden

Longtime readers of my work know that I believe that IT exists for one single reason—to provide access to applications for end users. As little as ten years ago this was relatively easy. All we had to do was install the applications on the users’ computers. But then came automated software distribution tools. Then Citrix. Then VMware, bladed PCs, and streaming.

In today’s world, the job of IT is more complex. We usually default to whatever technology has become the de facto method in our environment for giving users access to their applications. But if we take a step back, we’ll see that there are actually quite a few different (and very real) ways to provide these applications to users.

In this article, I’ll look at nine different application access architectures that we can use to provide Windows applications to users, and I’ll evaluate the pros and cons of each.

The options are:

  1. The old way. Install each application on the end user’s computer.
  2. Automated Software Distribution. Use a tool like SMS or Altiris to remotely install and update applications on end users’ computers.
  3. Citrix / Server-Based Computing. Install the application centrally on a terminal server and provide RDP or ICA access from the client device.
  4. Application Streaming. Use something like Softricity to stream the application to the user’s device on demand.
  5. Operating System Streaming. Use something like Ardence to stream the entire disk image (OS and all) to the user’s client device.
  6. Bladed PC. Install Windows XP on a server blade and then provide 1-to-1 remote access via XP’s built-in RDP remote desktop functionality.
  7. VMware PC. Build a huge VMware server and divide it into multiple VMs, with each VM running Windows XP. Provide remote access via XP’s built-in remote desktop.
  8. VMware Clients within Terminal Server / Citrix Sessions. Build a server and install Terminal Services and Citrix. Install VMware Workstation (or Microsoft Virtual PC) as a published application in Citrix. Then “publish” a VMware disk image for each user. Users connect to the published VM via ICA.
  9. The Future. Application execution components can execute on whichever backend systems they need (in a grid-like way), and presentation components can be displayed and consumed wherever they are needed.

Let’s take a more in-depth look at each option.

Option 1. The Old Way

Install each application on each end user’s computer, just like you’ve been doing for the past 20 years.

Pros

  • No application conflicts
  • We all know how to do this already
  • No servers are required

Cons

  • Applications need to be installed and updated manually
  • No way of knowing who has access to what
  • Applications can become corrupted or messed up by users
  • No easy backup
  • A single PC crash will take down that worker until IT can build a new PC.
  • Application access is based on physical machine (i.e. if a user walks up to a PC, they can use whatever is on it)

Option 2. Automated Software Distribution

Use a tool like SMS, Altiris, or ZENworks (does that even still exist?) to remotely install and update applications on end users’ computers.
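
To make the mechanics concrete, here's a minimal sketch (in Python, with made-up host and package data) of the targeting logic at the heart of these tools: compare what each machine reports as installed against what IT wants installed, and generate the install and upgrade jobs to dispatch.

    # Sketch of the targeting logic inside an automated software
    # distribution tool. Hosts, packages, and versions are made up.

    desired = {"office": "11.0", "acrobat": "7.0"}       # what IT wants everywhere

    inventory = {                                        # what each agent reported
        "pc-001": {"office": "10.0"},                    # needs an Office upgrade
        "pc-002": {"office": "11.0", "acrobat": "7.0"},  # already up to date
        "pc-003": {},                                    # fresh build, needs both
    }

    def plan_jobs(desired, inventory):
        """Return a list of (host, package, action) jobs to dispatch."""
        jobs = []
        for host, installed in inventory.items():
            for pkg, version in desired.items():
                have = installed.get(pkg)
                if have is None:
                    jobs.append((host, pkg, "install %s" % version))
                elif have != version:
                    jobs.append((host, pkg, "upgrade %s -> %s" % (have, version)))
        return jobs

    for host, pkg, action in plan_jobs(desired, inventory):
        # A real tool would now push the package to the target and run
        # something like "msiexec /i package.msi /qn" via its agent.
        print(host, pkg, action)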

Pros

  • Centralized management of applications
  • Applications installed on end user PCs can be inventoried

Cons

  • You have to learn how to package applications and updates
  • Applications can become corrupted or messed up by users
  • No easy backup
  • A single PC crash will take down that worker until IT can build a new PC.
  • Application access is based on physical machine (i.e. if a user walks up to a PC, they can use whatever is on it)

Option 3. Citrix / Server-Based Computing

Install the application centrally on a terminal server and provide RDP or ICA access from the client device.

Pros

  • Centralized management and configuration
  • Centralized backup
  • Connections from any device
  • Policy-based access (Only access certain application capabilities from certain clients in certain situations)

Cons

  • All application execution happens centrally, even when a client device is capable of doing work.
  • Requires major build-out of datacenter to create and add capacity (although this means that HP or IBM will invite you to their golf outings).
  • Client devices must be connected to the network in order to use their applications.
  • You have to learn how to install applications into multi-user environments
  • Not all applications will work (or will be practical) in multi-user environments


Option 4. Application Streaming and Virtualization

First of all, the terms “virtualization” and “streaming” are both trendy right now, and both have been co-opted by vendors to mean many different things. In this case, I’m referring to the ability of a client device to request an application and have the application components streamed on demand from the server to the client. All application execution happens on the client. To keep multiple streamed applications from conflicting with each other, most of these products also employ some sort of virtualization or isolation technology on the client.

The most popular product in this space is Softricity’s SoftGrid. Citrix has also announced something called Project Tarpon, although this is not a real product yet.

Softricity can be used to stream applications down to end user client devices. This can even happen outside of the firewall via a web portal that pushes down the Softricity agent and the applications to a new client computer.
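
Here's a toy sketch of the streaming-plus-cache idea in Python. (This is my simplification to illustrate the concept, not Softricity's actual protocol or API.) The client pulls application blocks from the server the first time they're needed and caches them locally, which is also why a laptop can keep running the app after it leaves the network:

    # Toy model of on-demand application streaming with a local cache.
    # The "server" is just a dict here; the real protocol is proprietary.

    SERVER_BLOCKS = {  # block number -> application bytes on the server
        0: b"app launcher code", 1: b"core feature code", 2: b"help files",
    }

    class StreamedApp:
        def __init__(self, server):
            self.server = server            # set to None once we go offline
            self.cache = {}                 # locally cached blocks

        def read_block(self, n):
            if n in self.cache:             # cache hit: works even offline
                return self.cache[n]
            if self.server is None:
                raise IOError("block %d not cached and client is offline" % n)
            self.cache[n] = self.server[n]  # stream on demand, then cache
            return self.cache[n]

    app = StreamedApp(SERVER_BLOCKS)
    app.read_block(0)           # launch: block 0 streamed from the server
    app.read_block(1)           # first feature use: block 1 streamed
    app.server = None           # the laptop leaves the network
    print(app.read_block(1))    # still works: served from the local cache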

Pros

  • A single application package can be streamed to any client—end user devices or Citrix/Terminal Servers
  • No application conflicts
  • Clients can use the applications offline

Cons

  • You have to sequence / package all of your applications
  • Application communication across virtualization partitions can be tricky
  • Some applications won’t work.

Option 5. Operating System Streaming

Use something like Ardence to stream the entire disk image (OS and all) to the user’s client device. Explaining exactly how Ardence works would require an entire paper (coming soon), but the quick version is this: Ardence provides network-based, block-level disk redirection that points the physical disks in client computers at virtual disk images (“vdisks”) sitting on network file servers. Client devices can each have their own vdisk file via 1-to-1 mapping, or multiple clients can share a single vdisk file.

A client computer boots to the network and the Ardence server recognizes it based on its MAC address and mounts the appropriate vdisk file.
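
A rough sketch of that boot-time lookup, plus the shared-image case, might look like this. (My own simplification, not Ardence's actual protocol; with a shared read-only “gold” image, each client's writes typically land in a separate write cache so the gold image is never touched.)

    # Toy model of OS streaming: MAC -> vdisk mapping, plus a per-client
    # write cache over a shared read-only "gold" image. Details invented.

    GOLD_IMAGE = {0: b"boot sector", 1: b"windows files"}  # shared vdisk blocks

    MAC_TO_VDISK = {
        "00:0c:29:aa:aa:aa": "gold",     # these two share the gold image
        "00:0c:29:bb:bb:bb": "gold",
        "00:0c:29:cc:cc:cc": "private",  # this one gets a 1-to-1 vdisk
    }

    class StreamedDisk:
        def __init__(self, base):
            self.base = base
            self.writes = {}               # this client's write cache

        def read(self, block):
            return self.writes.get(block, self.base.get(block))

        def write(self, block, data):
            self.writes[block] = data      # never touches the gold image

    def mount_for(mac):
        kind = MAC_TO_VDISK[mac]           # server recognizes the MAC
        return StreamedDisk(GOLD_IMAGE if kind == "gold" else {})

    disk = mount_for("00:0c:29:aa:aa:aa")
    disk.write(1, b"this client's changes")
    print(disk.read(1))   # sees its own writes
    print(disk.read(0))   # falls through to the shared gold image
    # Rebooting = discarding the StreamedDisk: back to the gold state.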

Pros

  • Rebooting a machine resets everything back to the “gold” state
  • A single computer can do different things (by mounting different disk images) each time it boots. (Your Citrix server can become a backup server by night. Your receptionist’s PC can become a grid node by night.)
  • Makes use of the power of all your PCs
  • Any client computer can act in any role

Cons

  • Must have network connectivity for this to work

Option 6. Bladed PC

Install Windows XP on a server blade and then provide remote access via XP’s built-in RDP remote desktop functionality. Clients connect from thin client devices. In most cases you could store the users’ disk volumes on a SAN and mount them on-demand to the particular blade that the user connects to. (I've written about Bladed PCs in the past.)

HP has a product in this area called the “Consolidated Client Infrastructure” (CCI). In this case they can run multiple users off of a single blade, and they use special blade hardware that’s custom-designed for running Windows XP.
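
The brokering step that makes the SAN part work looks roughly like this (all names below are hypothetical; HP and others wrap this up in their own management tools): find a free blade, attach the user's SAN volume to it, and hand the client that blade's address.

    # Toy blade-PC broker: one user per blade, with the user's disk
    # volume attached from the SAN at connect time. All names invented.

    blades = {"blade-01": None, "blade-02": None}    # blade -> assigned user
    san_volumes = {"alice": "lun-17", "bob": "lun-23"}

    def connect(user):
        free = next((b for b, u in blades.items() if u is None), None)
        if free is None:
            raise RuntimeError("no blades available; buy more hardware")
        blades[free] = user
        # A real broker would now attach the LUN to the blade and wait
        # for Windows XP to boot before returning the RDP address.
        print("attaching %s to %s for %s" % (san_volumes[user], free, user))
        return free   # the client opens an RDP session to this blade

    print("connect via RDP to:", connect("alice"))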

Pros

  • No application compatibility issues.
  • Better / easier security.
  • The clients run the "workstation" version of software.
  • Users have more control over their individual desktop.
  • Easier backups.

Cons

  • Must have network connectivity for this to work.
  • Management tools are needed to manage the software within each bladed PC.

Option 7. Centralized VMware PC

Build a huge VMware server and divide it into individual VMs, with each VM running Windows XP. Provide remote access via XP’s built-in remote desktop. Clients connect from thin client devices. (I've written about this in the past.)

This is a lot like Option 6, except that each user connects to a Windows XP session in a VM instead of to their own physical blade. Using VMware provides several advantages, chiefly that VM sessions can be suspended (and unloaded from memory) and resumed later, much like hibernating a laptop. For instance, imagine that after a 30-minute idle period, the user’s session is disconnected. Their session would remain active on the server for four hours, but after that the virtual machine is suspended and its memory contents dumped to disk. At this point the VM is consuming no server resources and can stay suspended forever. When the user finally comes back (even after several weeks), their session is simply retrieved from disk and restored to any available server, and the user picks up right where they left off.
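
That lifecycle is really just a small state machine driven by idle time. Here's a sketch using the timings from the example above (everything else is invented):

    # Sketch of the disconnect/suspend lifecycle described above.
    # The thresholds match the example: 30 minutes, then 4 hours.

    IDLE_TO_DISCONNECT = 30 * 60           # 30 minutes, in seconds
    DISCONNECT_TO_SUSPEND = 4 * 60 * 60    # 4 hours, in seconds

    def next_state(state, seconds_in_state):
        """Advance a session's state based on how long it has sat there."""
        if state == "active" and seconds_in_state >= IDLE_TO_DISCONNECT:
            return "disconnected"   # session keeps running, UI detached
        if state == "disconnected" and seconds_in_state >= DISCONNECT_TO_SUSPEND:
            return "suspended"      # memory dumped to disk; the VM now
                                    # consumes no server resources at all
        return state

    def user_returns():
        # A suspended VM can be restored to *any* host with spare capacity,
        # even weeks later; the user picks up exactly where they left off.
        return "active"

    s = next_state("active", 31 * 60)   # -> "disconnected"
    s = next_state(s, 5 * 60 * 60)      # -> "suspended"
    print(s, "->", user_returns())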

This will be really cool when Citrix gets involved. Even though they haven’t announced anything, you know that Citrix has to be thinking about this. For example, right now this technique involves a user connecting to a Windows XP workstation via RDP and the workstation’s built-in remote desktop functionality. This is a perfect opportunity for Citrix to come in with ICA, and to let the users connect to their desktop VMs via ICA.

Also, and more importantly, Citrix could provide the crucial middleware “glue” to hold all this together. This method requires that a user authenticates, and then the system has to figure out which VM host has capacity, find the user’s virtual disk files, fire up a VM using those files, and then connect the user via ICA to that VM. This is complex. However, Citrix has experience managing all of these issues in a Presentation Server world, so I would imagine it’s not too difficult to apply that expertise to VMware. Look for major announcements from these two companies in this space.
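
Sketched out, that “glue” is a five-step sequence the broker has to run on every connection. (Every name below is invented; as I said, nobody ships this yet.)

    # Sketch of the middleware "glue": what a VM connection broker does
    # each time a user connects. Every name here is invented.

    hosts = {"esx-01": 3, "esx-02": 0}          # host -> free VM slots
    vdisks = {"alice": "/san/vms/alice.vmdk"}   # user -> virtual disk file

    def authenticate(user, password):
        return True   # stand-in; a real broker would hit the directory

    def power_on_or_resume(host, disk):
        return {"host": host, "disk": disk}     # stand-in for the VM API call

    def broker_connect(user, password):
        if not authenticate(user, password):          # step 1: authenticate
            raise PermissionError("bad credentials")
        host = max(hosts, key=hosts.get)              # step 2: find capacity
        if hosts[host] == 0:
            raise RuntimeError("no capacity on any host")
        disk = vdisks[user]                           # step 3: find the vdisk
        vm = power_on_or_resume(host, disk)           # step 4: fire up the VM
        hosts[host] -= 1
        return "ica://%s" % vm["host"]                # step 5: point ICA at it

    print(broker_connect("alice", "secret"))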

Pros

  • Better performance.
  • No application compatibility issues.
  • Better / easier security.
  • You can "suspend" individual VMs and then move them from server to server.
  • The clients run the "workstation" version of software.
  • Users have more control over their individual desktop.
  • Users can take their sessions with them when they go offline (by “checking out” their disk images and running them on a local VM on a client device, like VMware ACE).
  • Central backups.

Cons

  • A lot of server hardware is required.
  • Management tools are needed for the desktop VM.
  • Good luck if you want to do this today

Option 8. VMware Clients within Terminal Server / Citrix Sessions

Build a server and install Terminal Services and Citrix. Install VMware Workstation (or Microsoft Virtual PC) as a published application in Citrix. Then “publish” a VMware disk image for each user. Users connect to the published VM via ICA. (Citrix has some articles about how to configure this.)
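
The published-app side can be as simple as a wrapper script that points each user at their own disk image. Here's a sketch (the paths are invented, and I'm assuming Workstation's vmware.exe accepts a .vmx path with a "-x" switch to power the VM on; check the command-line switches for your version):

    # Sketch of the "published application" wrapper: Citrix publishes this
    # script instead of the app, and each user gets their own disk image.
    # Paths and the "-x" power-on switch are assumptions; verify locally.

    import getpass
    import subprocess

    user = getpass.getuser()
    vmx = r"\\fileserver\vms\%s\winxp.vmx" % user   # one image per user

    subprocess.run([r"C:\Program Files\VMware\VMware Workstation\vmware.exe",
                    "-x", vmx])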

Pros

  • No application compatibility issues.
  • Better / easier security.
  • You can "suspend" individual VMs and then move them from server to server.
  • The clients run the "workstation" version of software.
  • Users have more control over their individual desktop.
  • Users can take their sessions with them when they go offline (by “checking out” their disk images and running them on a local VM on a client device, like VMware ACE).
  • Central backups.

Cons

  • This seems like a stretch but it’s the best we can come up with now
  • One instance of VMware Workstation in each session? I wonder what the performance is like.

Option 9. The Future

What’s the problem with today’s applications? They’re still running on a basic architecture that’s more than 20 years old. Current Windows applications are designed to run one at a time, within the walls of one single box, and with one (local) user interface.

Web applications do a great job of running on multiple servers, and a great job of separating the application execution from the user interface. Their main downside is the fact that they’re web apps and not Windows apps.

The Future: Terminal Server and Internet Information Server merge into a single product (called Microsoft Application Server). Application execution components can execute on whichever backend systems they need, and presentation components can be displayed and consumed wherever they are needed. The OS runs on the network, not on a single computer.

Pros

  • This Rocks

Cons

  • Okay, it’s not “technically” possible (yet)

Combining Multiple Options

Of course the nine options presented above are not mutually exclusive, and they’ll change and evolve over time. For example, think about running Citrix servers in VMs. This would allow you to provide users with policy-based ICA access to individual applications while still having server flexibility on the backend.

Think it’s not possible for performance reasons? Simply wait another year or so until Microsoft releases Longhorn server with its built-in VM hypervisor and all Intel and AMD CPUs support hardware-level virtualization. (Imagine the advantages of virtualization without the performance penalties.)

Imagine using Softricity to stream applications to Citrix servers. You’d get the same policy-based ICA access and backend flexibility as above, AND you won’t have to install any applications onto any servers.

Now add Ardence to the picture to stream the operating system to the server, continue using Softricity to stream applications to the server, and use Citrix to provide policy-based remote access to those applications. Go nuts and put this all in a VM!

The bottom line is that we’re moving into a world where any application can be executed on any backend server, and the user interface will be built for the user regardless of where they are. Some of the ideas in this article are just theoretical, but many are real today.

Did I get this right? Did I miss any pros or cons or did I miss a whole section? Share your thoughts below.

 
 




Comments

Guest wrote COST And COMPLEXITY
on Tue, Mar 14 2006 2:06 AM
Citrix should have bought VMware, or vice versa.
 
I love all this innovation, but it reminds me of the time when I was using MemMaker to make room for my PC games. It's just the same old architecture, but with a whole lot more complexity and cost.
 
Luis
Michel Roth wrote Thin Through Networks
on Tue, Mar 14 2006 2:40 AM
Hi Brian,

Great article. I agree wholeheartedly that grid computing is the next step from an IT evolutionary standpoint. What we are doing today (roughly options one through eight) relies on, as you said, a 20-year-old "concept". Somewhere along the way this will change. Whoever does this right first will be MS #2 (Google, anyone?)

However, one factor of today's infrastructure will continue to grow tremendously in importance: networks. Connectivity will become an even more crucial resource in the future. Not just the Internet, but every kind of (yet to be created) network. This is reasonably well taken care of in corporate environments, but is still in its infancy in MANs and home networks.

Finally, with grid computing being business as usual in corporate environments and having reached the "iPod level" at home, I can finally buy that neat little device that makes Origami look like MS-DOS; that's all I need. Thin Clients won't be called Thin Clients. Just Clients...


Michel Roth
www.thincomputing.net




Guest wrote Hardware / license costs ?
on Tue, Mar 14 2006 2:44 AM
Is it possible to add hardware and license costs to this comparison, to get a more realistic view?
And what about network bandwidth / performance for the streaming options?
 
Richard Thompson wrote Question of time...
on Tue, Mar 14 2006 7:00 AM
No doubt your scenario on "The Future" will happen, it is just a question of time...
 
Exciting times lie ahead for us IT Professionals...
Tim Mangan wrote Three for the future
on Tue, Mar 14 2006 8:56 AM
I see three different things happening in the "future" area.
 
First is .NET remoting. This becomes popular (at least for a while) because of viral deployment by folks like Google. Web apps don't have to suck. With .NET remoting, components can run in the best location for them - rich user experiences can happen. Users don't care where the components are, as long as they can bang a URL and get what they want. This frees IT up to deploy as they want (a good thing), but without an infrastructure or management capability they are free to screw it up badly and not even know it (until the screaming starts). If this future takes hold without a way of managing it, grab the middle lifeboat seat, because the water is going to be rough.
 
Second is grid. Grid provides the infrastructure and thus has appeal to IT. But apps have to be written for that grid, and therein lies the weakness. Without a market you get no apps; without apps you get no market. IBM, et al, have a good start. Should they manage to look like they are going to make it to market, we may still see the opportunity die due to the emergence of competing grid infrastructure options. (Would you put it past Microsoft to announce and release a platform just to kill another vendor's market?)
 
Third is a shift from apps to data. Periods of innovation need to be followed by consolidation. For us that might mean a consolidation into OS/application deployments that are simple and safe. Ordering up an OS/app infrastructure via imaging is a natural fit here (as wasteful as it is). Brian listed this in the today world, but it only works when we can separate the "data" and "personal experience" side from the image. Given the compelling need to manage the data (insert your favorite government regulation here), separating it into a managed SAN'd box is happening. I like what Microsoft is doing with SharePoint 3, but nobody else seems to agree. Maybe the SAN guys make a play. Maybe someone else. But given the ability to separate out and manage the data (who touches what and when), separating the "personal experience" from the OS/app image means that what IT does today becomes as simple as placing an order. ("I'd like an OS with Office." "Would you like fries with that?") So IT will concentrate less on the apps and more on what is valuable - the data and keeping user productivity high.
 
Man, what was in that coffee today?
 
tim
Guest wrote Some thoughts
on Tue, Mar 14 2006 3:44 PM
I think that an "ideal" solution is one that assumes a heterogeneous environment where multiple application distribution methods are used. For example, application publishing with seamless windows makes the most sense when locally executed apps are also used. Otherwise, why not just publish the entire remote desktop? It seems a waste not to take advantage of the powerful servers, powerful clients, good network infrastructure, and virtualization technologies which are all readily accessible these days (relatively speaking).
 
One thing that has always bugged me about application streaming is application licensing. It seems to me that the only applications that can be streamed without violating their license agreement are open source apps, or apps that you have a site license for.
 
Finally, in this context it's worth taking a look at fat/rich/smart-client technologies such as IBM's WMC.
Guest wrote D'Oh!
on Tue, Mar 14 2006 7:00 PM
Forgot Web based and Java Applications!
 
David Caddick wrote New Paradigm?
on Thu, Mar 16 2006 7:50 AM
Great article Brian, and it neatly touches on what is starting to become a crowded playing field.
 
I think Michel and Tim have both raised valid points about how this architecture may come about, but for my 2 cents' worth, one of the things that continually forces us to choose a particular path or technology is the Applications we are trying to deliver.
 
Brian also made a very valid point at the start of the Article when he pointed out that IT only exists to provide the Applications to the Users!
 
Our efforts are all about trying to deliver exactly that - we all live and breathe in this vast middleware, where we are always trying to cobble together a *solution* that meets the needs of the Customer, and what they ultimately want is to provide Applications X and Y to users A through H, as simply as possible while maintaining security and functionality!
 
On the surface of it this isn't too hard, but then we start totting up all the specifics of the Customer's environment that we have to work in, then we look at the specifics of the main Business Application we are trying to deliver, and now we find that our lovely *solution* has been severely constrained to one or two options.
 
What really gets me is that we are constantly still dealing with Applications that are embedded back in the Access 2 Runtime days. We might be ready to deliver the brave new world, but we are still going to be tied to crappy old Applications for some time, until the Developers at least start making an effort to keep up with technology and at the very least provide Applications that go some way to conforming with API Best Practices.
 
/RANT OFF
 
I'll have some of Tim's coffee? Might be better for me?
Guest wrote Not very realistic
on Thu, Mar 16 2006 9:59 AM
I must say I had a good laugh when reading your options 7 & 8.
But then I read on, and saw that you actually summed up some 'Pros' for these options.
However, in my opinion, both options are completely useless; you just shift the OS from the local desktop to a VMware environment and change nothing about it.
So it's in fact the exact same as the starting position, where you've got unmanaged desktops.
They're no longer decentralized, but still unmanaged.
You'd still need SMS, or SoftGrid, etc. to manage the apps.
 
However, the major problem with those options is the CPU power that's lacking.
A simple desktop runs at 2 GHz and has 256 MB of memory.
A normal desktop runs at a higher speed and has more memory.
Desktop memory/CPU is less scarce and cheaper than server memory and CPU, not even taking the OS restrictions (memory limits, number of CPUs, etc.) into account.
So centrally running 4,000 VMware desktops will be very pricey compared to running the OS natively on 4,000 el cheapo Dell PCs.
Or do you really think that a company can just as easily afford 2 TB of internal memory and, let's assume, a 3,600-CPU VMware system as it could buy 4,000 Dell PCs?
I don't think so, and I also think that this pinpoints the exact problem that 'Presentation Services' and 'Server Based Computing' are facing at the moment.
 
Citrix is shifting from Presentation services towards Access Strategies. Instead of trying to make the apps run on a central piece of hardware, they're thinking about providing access to the applications.
Application developers are doing the same; Microsoft is shifting towards what they call 'Live!'. It's a different approach to the 'thin client' methodology: instead of moving the client application towards a central environment, the client application is kept 'stupid' by means of web technologies such as AJAX and 3-tier approaches in .NET development. All you need is a browser and a connection, and you can do your work.
What's the use of _not_ using the local desktop for that, if you have it available anyway? A thin client device is often just as expensive as a simple desktop... but the desktop is a little more versatile when it comes to local application usage such as multimedia.
 
The next couple of years we'll certainly see a shift in application development where the Windows API will become less and less important. Web APIs are the way forward, as Google proves with Gmail, and as Microsoft shows with Office Live. Most software companies (including SAP) have web front-ends to their business apps.
 
Where a couple of years ago 'always connected' was the future, it is common practice nowadays with DSL, WiMAX, WiFi, UMTS and 3/4G mobile communication.
 
Applications that are installed locally nowadays are often installed locally for a reason; either it's hardware-specific (DirectX support, hardware video acceleration, sound card capabilities or DVD burning), or simply because it's a non-CS application.

Device management by means of SMS, RES Wisdom and/or PowerFuse is becoming easier every day, nibbling at Citrix's share when it comes to application management. In the end, Citrix is just an expensive method of achieving the same (functional) end result with less performance.
 
The same goes for VMware; okay, when it comes to server virtualization it's true, but virtualizing actual desktop environments in VMware is downright useless. It's like putting 4,000 Dell Ferraris on a big @ss VMware train and saying it works faster and better that way.
Either lose or use the 'desktop'.
Just my 2 cents...(maybe a bit more).
 
Regards,
 
Marcel Göertz - http://oxle.com
Guest wrote RE: Not very realistic
on Thu, Mar 16 2006 12:18 PM
I agree with you as far as it goes for completely centralized companies.
But what if your people are working everywhere?
People these days work everywhere, and they expect to be able to do all the things they do at the office.
 
At this moment people still need desktops to work behind, so you have to provide them anyway.
From there it is only a small step to make the desktop a virtual one. And from there, you just need some creativity to provide these virtual desktops to your remote users.
If the user has an internet connection, he can connect with RDP or ICA to the virtual desktop, with all the functionality that he wants.
 
Your users are happy because they can do all the things they do in the office while on the road or in a small sub-location.
Your support department is happy because they can give the same level of support to remote users as to local users, and they don't need any extra skills.
 
The future is web based, but not in this decade.
Guest wrote Altiris SVS
on Fri, Mar 17 2006 8:27 AM
Hi,
 
I've really enjoyed reading your article.
 
I think one product is worth mentioning in your option 4 though: Altiris Software Virtualization Solution.
 
SVS runs completely on the client side but does allow for centralized management.
I think that in the (emerging) application virtualization segment, Altiris with SVS will definitely become a key player.
Guest wrote It's great to see a debate like this!
on Sun, Mar 19 2006 8:46 AM
I think it's great that you are going beyond Citrix and debating the different options available to us. I think the post is really useful and the comments make it even more so. I have responded to your post with some comments on my blog here: http://steves.blogharbor.com/blog/_archives/2006/3/16/1824667.html, and I reposted an old article that covers similar ground but is still useful as well: http://steves.blogharbor.com/blog/_archives/2006/3/19/1828799.html. On a related topic, I have also recently discussed the importance of helping people make sound decisions on IT-related topics here: http://steves.blogharbor.com/blog/_archives/2006/3/16/1824680.html.
Guest wrote RE: New Paradigm?
on Mon, Mar 20 2006 10:42 AM
ORIGINAL: WallabyFan (the "New Paradigm?" comment, quoted in full above)

 
Imho IT exists to allow superior access to information. You could argue that applications are just a method of viewing/manipulating that information. I know what you are saying, but infrastructure costs are small-time and relatively easy to transition compared to applications/datastores, and I don't believe that any one of us has ever delivered an Access infrastructure that exactly conforms to MS/Citrix best practices :)
 
Kind regards
Drew
Guest wrote RE: Not very realistic
on Tue, Mar 21 2006 4:26 AM
Marcel,
 
what you say is appealing, but then... you wake up all sweaty... ;-)
 
You talk about the future (AJAX, web applications, etc.)... this is all about fixing the present mess.
 
You give credit to server virtualization, but there are many people who think that virtualizing thousands of OS images instead of installing thousands of physical servers does not buy you much, since you still need to manage the very same software mess. True. But does anyone have a better idea? Can you collapse thousands of applications onto a couple of "Windows mainframes" with a couple of OS images, a la mainframe? I guess not, hence...
 
The story on the PC is not any different. Of course using virtualization to fix the problem is a bypass, but it is still the best option today for the applications of today. We can talk about AJAX and all those cool technologies, but the matter of fact is that we have customers still running Win98 desktops, not because they like it but because they have applications that can't be put anywhere else. Of course this is an extreme example, but... you know what I mean.
 
As for the desktop power you described:
>A simple desktop runs at 2 GHz and has 256 MB of memory.
>A normal desktop runs at a higher speed and has more memory.

True... but you failed to mention how much of those resources is actually being used around the clock (24 hours). If you are lucky (really lucky), each end user will use on average 15-20% of those resources for some 8 hours out of 24. So you don't necessarily need to carry that overconfiguration over to the server if you want to virtualize your desktops. And this is on top of other pros, such as easy D/R (we had a customer who was interested in doing this only because he needed a way to provide DR for end users working on XP desktops with "standard legacy applications").
 
Don't get me wrong... I applaud you for your vision... but there is a difference between a strategy and a tactic. This is all about tactically solving an issue that strategically won't be solved any time soon (I think).
 
Massimo Re Ferre' (king@it.ibm.com)
 
 
 
Guest wrote Number 9
on Fri, Mar 24 2006 8:28 AM
I'm not really sure that number 9 is totally the future, because there are a lot of apps being developed (not so many commercially, but many corporately) that are web delivered but have rich content and controls like a fat application. With newer development technologies like .NET, Swing and Flash, an application can be delivered via a web browser but still use a blend of server and client execution, deliver true rich content, and use fat-client-like controls without requiring client-side installation, providing the benefits of a web app (multi-server, high availability, scalability, instant upgrades) without the sacrifices of a "classic" web application.

As devices become more robust, this type of content can be delivered to standard Windows desktops, Macs, Linux and Unix workstations, via terminal services, on a PDA, or through a thin client. The possibilities are limitless.

The unfortunate reality of this technology today is that it requires clunky plug-ins or frameworks and it often ends up being interpreted to a great extent, which causes performance to suffer.  This technology also requires connectivity all the time, which is not a requirement of traditional fat applications.

We are still a good distance from a utopian computing environment, but we are much closer than we were as little as 3 years ago.
Guest wrote RE: Altiris SVS
on Fri, Mar 24 2006 10:28 AM
I agree. I spent all last night and again first thing this morning configuring our n-tier application (db, app+n and TS/Citrix+n) for SVS, and found that it works great.

I use:
 SQL2000 SP4 on Windows 2003 SP1
 Windows 2003 SP1 as our Application Server, aka middle tier.
    Has the following services:
        COM+ Application, which gets exported as an App Proxy for the Terminal Server; in this case it is used with the SVS Layer
       ODBC System DSN's to our SQL2000 Server
        DCOM: various objects we develop for our application
 Windows 2003 SP1 Term Server App Mode
 This is the client portion of our Application; it receives the Application Proxy from the Middle Tier and our Client forms and such
    In my Test I used Windows 2003 SP1, TS in admin mode, and deployed our client portion including the app proxy using SVS 2.0


It took me a while to figure out how to get the App proxy to work with the Layer I built but it is sound as the Pound now!

Regards,

Noah Pullen
unicaresys.com
Keith Daniel wrote Softricity - Worth Mentioning
on Fri, Mar 31 2006 4:48 PM
I thought it worth mentioning that in the Softricity world users can run streamed apps when they are not connected to their network (that is if they have already streamed the app once).   This is because the app gets cached locally.   That's a nice capability for companies with a lot of traveling users since you don't need locally installed apps, or access to the network to run the apps.
Kyle Joh wrote The future
on Mon, Apr 3 2006 10:35 AM
I think the future is a combo of the new virtualization methods. Gone is all the fat and redundant processing.

A smarter thin client that can not only decode and display data, but also process streamed data.

For example, a Citrix/VMware-like combo client that can display the session but also process an application stream delivered by something like Softricity.

The problem with today's thin solutions is that they are not truly thin; you have FAT servers serving up the FAT apps. If we can reduce the weight of the apps on the servers, they can run lighter, handle more apps, and offload some processing to the thin clients as well.

This used to be the concept of the Java language, but it was poorly conceived. Hopefully someone combines the power of Citrix and VMware into one client with a good application translation stack.
Guest wrote RE: Not very realistic
on Mon, Apr 3 2006 11:37 PM
Hi there. Very interesting reading material; you have definitely put into words what is pretty much obvious where server-based computing is involved. I am the chief architect of a company that has faced these same issues. We have identified that server computing has a great benefit in providing a work-anywhere scenario, and we have raised the same issues you raise in your article. As a result we came up with a methodology that combines two worlds: the desktop development methodology and the web deployment methodology, which is basically URL navigation to a web application. The development environment, which we call WebGui, provides desktop-style development (currently WinForms over .NET; future support for Java) with an underlying projection through AJAX to the browser. This means that, as opposed to remote-desktop-based solutions, we use less server CPU by keeping struct-like objects that hold the application state while projecting an AJAX-based application to the browser. So WebGui-based applications, as opposed to a standard remote desktop scenario, do not use server resources for the presentation layer. You can find out more about WebGui on our site; if you are a .NET developer you can download the WebGui community edition (www.visualwebgui.com) and see for yourself.
 
 
guy.peled@visualwebgui.com
Guest wrote RE: Not very realistic
on Tue, Apr 4 2006 11:01 AM
Here's a headline that's "Not very realistic":

Company realizes that all of their desktop applications are completely crap and embarks upon project to rewrite all 1,000 of the applications in Web 2.0.

AHAHAHAHAHAH LOL AHAHAHAHAHHA... I think I just burst my spleen.

Shawn
Shawn Bass wrote RE: Not very realistic
on Tue, Apr 4 2006 11:01 AM
Ooops, that was me.  Didn't mean to be anon.

Shawn
Guest wrote Newton's law
on Sat, Apr 15 2006 5:15 PM
I've been working as an administrator for about 12 years now. From what I've seen, I'd say that the biggest problem in system administration is not so much the demands of the administrator to retain a controllable environment (we'll always find a better way of doing that), but the demands the users place on the environment that is proposed to them. Technically speaking: I've worked in environments where most production applications fully resided on the server and the user-side installation was on demand. This meant that most applications had to be extensively researched, as automation tools lacked that separated user shell setup data, system-related setup data, and collateral data. And of course, after installing, that machine leaks the application to everyone. I've also worked in an environment where most workstations had nearly all applications installed on initialization; after a total crash, a new MAC address was added and the new system installed itself, meaning just one terminal was lost, and the golden state was preserved only through extremely tight security privileges. Last but not least, I've worked in an environment where we had a genius who built a network login shell that recreated an entirely personal Windows installation and all the applications per user, per application, per user group. This is of course very time consuming (and the man's research was idiotic), but isn't all the remote boot stuff time consuming?
 
From what I see, noting the statement that applications should work "like" today's web apps, I get the feeling that a user should get a system that is totally isolated from the applications. It only executes application parts on demand, and the guest system only touches/smells the application's interface on execution. No local user settings. No local data. Just system.
 
But we're talking production here... not just fancy golden terminals. A user demands production capabilities from his workstation. He demands state-of-the-art application technology. A user demands extendability on demand.
 
Newton's law: a moving object wants to stay in motion, whereas a static object wants to stay in place.
 
Administrators desire flexibility, manageability of the applications, and the security and privacy of the production materials, whereas the user is just concerned about the quality of his production itself. Therefore the application interface you mentioned won't be replaced for a long while.
 
Administrators (IT technology) are moving, but users (applications) like to stay in place.
 
So my tech hint would be: get the user in motion so we can get a move on!
 
(Very good article!)
 
I've thought about programming an application shell (bootstrap), where the host OS only interacts with the shell and the application itself functions like a VM, with all application data residing on whatever medium. A VM just for the app, not for the entire OS; let's call it a "virtual (protected) application". The benefit would be that applications could be suspended and later resumed without having to reside in memory or workspace, being shut down in the process of suspending. It wouldn't matter where the OS retrieves the app from. Think about suspending your work on one terminal and resuming it completely, in no time, on another. But the Windows API catches you'd have to make are too immense to even glance at. Better rework your OS, Billy boy!
Guest wrote Reluctant Windows Admin
on Wed, May 3 2006 5:45 PM

I read this article with rueful chuckles.

I am only reluctantly a Windows admin. I cut my teeth working
with unix variants, and while the solutions there are not total,
most of them have been in place for decades:

I think the equivalent of a 'streaming application' is an application mounted on a network share. If a Unix application needs an extra library (DLL), it is usually in the application folder, not in the OS folder.

Copy an application to a network-mounted folder, and all users who have read access to that folder and execute permission on the application can use it. No registry entries.


The X Window System allows an application to be hosted on one computer but displayed on another.

I've been trying for years to get this sort of functionality in Windows. Sigh.
Guest wrote Wise up
on Thu, Jun 15 2006 6:46 AM
Oh please guys, do yourselves a favour and get a proper server from IBM, like an AS/400.
 
Hypervisor? partitions? VM? virtualisation?
 
I created my first virtual machine in 1985, and it ran an application with 300 users. (and Microsoft are only planning to release a true multi-user operating system soon? 21 years later?)
 
And you know what? the AS/400 has never been infected with a virus. Yes that's right; NEVER
 
IBM have been building business servers for 50 years; they know what they're doing. Microsoft are still learning the ropes.
 
Guest wrote VMware ACE
on Mon, Jun 19 2006 6:31 AM
Check into the VMware ACE server. That's a cool app for the kind of use you're talking about here.
Mads Soerensen wrote Can we get some drawings? :-)
on Thu, Nov 9 2006 3:25 PM
Very nice article, but I could really use some Visio drawings or something like that. It's hard to imagine how the traffic will perform on the network, and how it will act between the servers and clients.
