
A decade of server-based computing: how we got from WinFrame to the virtual, streamed, VDI, XenApp world of today

Written on Feb 17, 2008


by Brian Madden

I'm going to go out on a limb and guess that you probably didn't choose to get involved with this "Citrix" thing on purpose. Do you remember the first time you heard about Citrix, or server-based computing in general?

For me this happened in 1997. I was working on the helpdesk of an electronics company in Akron, OH, and we had this helpdesk application (QSupport) that was painfully slow over a WAN connection. (We had a remote helpdesk across a frame-relay link in Houston.) It would literally take 30 seconds from the time someone clicked on the link in Houston until the window opened up on their desktop.

I remember rolling my eyes when the Citrix sales reps visited our IT department that hot summer day, trying to convince us that WinFrame was the way to go. I had built a Windows 95 PCAnywhere-based remote access solution for a law firm a year earlier, and nightmares of that were still fresh in my brain. I just couldn't believe that this "multiuser PCAnywhere" solution would actually work.

Fortunately the people I worked for had more foresight (or more guts!) than I had, and we bought a 10-user license of WinFrame and set it up on a Cubix server in our datacenter. It turned out that this "Citrix thing" actually worked, and it worked well. We only had about 30 desktops that needed access to the helpdesk app. The Houston users all accessed it via Citrix, and the Ohio users all had it running locally on their PCs.

Updating the client-side components of this app was a nightmare. It was a tedious process, and it seemed that as soon as we got everyone in Ohio updated, we had to turn around and start the process again with brand-new updates. But this was an Ohio-only problem. The users in Houston didn't have this problem, because we just updated the single app on our lone WinFrame server, and in one fell swoop those users had instant access to the latest app.

So even though we bought this "Citrix thing" as a last-ditch effort to make our bloated helpdesk app run over a WAN, we sort of realized we had a "bonus" benefit: ease of maintenance and patching. "You know what?" my buddy Pete said over a beer one night, "What if we use WinFrame for our local Ohio helpdesk staff too?"

Clearly this guy was crazy.

"No. No.. Hear me out. How much time do spend updating that software? Thirty minutes per PC times twenty PCs here? What's that... ten hours per update? And what's a WinFrame license cost, about $40?" (Remember this was a long time ago!) Even though it seemed crazy, we ended up building another server to support the Ohio users, thus cementing "Citrix" as THE deployment mechanism for this app in our company.

"Man, we should use Citrix for all our apps. It's so easy!"

Like I said, Pete was crazy.

Server-based computing takes hold

Citrix deserves a lot of credit. It was Citrix that pioneered the first commercially-successful server-based computing add-on for Windows. If it hadn't been for Citrix, there would be no Terminal Services features in Windows today.

Throughout the late 1990s, and into the early years of this decade, server-based computing (SBC), led by Citrix (both technically and culturally), really took off. Citrix's messaging and positioning of the late 1990s touted four benefits to SBC, tagged with the acronym MAPS: Management, Access, Performance, and Security.

Management meant that SBC made it simple to manage applications--the fact that updating a single installed instance could instantly update dozens of users. Access was about the fact that applications made available via SBC could be instantly accessed from just about any client device--even non-Windows clients. And these clients didn't even have to have the application installed. The Performance benefit was highlighted perfectly by our two-tier helpdesk app in Ohio. Using SBC, the executing application could be sitting in the same rack (or tower in those days) as the back-end database. They could share the same ring! (Hah!) And finally, Security was about what Citrix called "eyes only" security, meaning that your data never left your datacenter, since only "pictures" of your data ever went across the wire.

Citrix pushed these four "classic" benefits big time in their early years.

From a customer standpoint, Citrix adoption almost always followed a typical pattern. At first people just didn't believe it would work, but ultimately they relented because there was one "bad app" for which they had no choice other than to use Citrix. Then after a while the IT staff would start to think, "Hey, this isn't so bad," and maybe another application or two would be snuck onto the Citrix server. After a few years, a lot of companies found that they had several applications running on Citrix.

In 2001, Citrix released MetaFrame "XP," a "2.0" release that was the first version that could realistically scale up to support more than a handful of servers. Now companies that had "scattered" implementations of Citrix here and there could consolidate their servers and for the first time create an actual "Citrix strategy" for themselves.

I can't even count how many projects I did like this in the 2001-2003 timeframe. (And really that's what my first MetaFrame XP book was about.)

"Let's put all our apps on Citrix!"

Server-based computing was great! Management. Access. Performance. Security. As companies consolidated and built huge Citrix farms, a fundamental "flip-flop" took place. While the 1990s were about "let's deploy all of our apps the 'old way' via local installs on desktops," the early 2000s were about "let's deploy all our apps on Citrix!" Of course this was not technically possible, since SBC still had (and has) some major drawbacks for some use cases. (No offline use of applications, poor performance for graphically-intense applications, and the fact that some apps just wouldn't run on a Citrix server.) But really a lot of companies in those days made it their policy that all apps would be deployed via Citrix unless there was a technical reason that a particular app wouldn't work that way.

So companies ended up with two major (and non-integrated) application deployment mechanisms: the Citrix way and the old way. (I'm lumping all the automated software distribution tools like SMS and ZENworks into the "old way," since at the end of the day those tools were still about locally-installed applications running on desktops.)

Citrix becomes a victim of their own success

As Citrix grew and people became more comfortable with SBC in general, companies piled more and more applications on their Citrix servers. At some point it became inevitable that a company would end up with too many different types of applications on a single Citrix server, and the regression testing needed to apply a simple patch outweighed many of the management benefits of going to Citrix in the first place.

Most companies responded by splitting their Citrix servers into logical groups (informally known as "silos"), with each silo hosting a different subset of the overall application mix. While the siloing of Citrix farms directly addressed the application compatibility and regression testing issues, it also introduced its own challenges. If not all apps were installed on all servers, what happens if one user needs two different apps on two different servers? Do you let that user connect to two different servers at once? If so, how do you handle their user profile? Or their data?

It was around this time that a company called Softricity entered the spotlight. Softricity had developed a technology during the dot-com boom whereby they could "isolate" an application running on a Windows system. They did this by installing a little software shim that sat in-between the application and the operating system. This shim effectively meant that the app could only see a "clean" operating system--it thought it was the only thing installed. This isolation technology actually allowed applications that would normally conflict with each other to run side-by-side on a Windows computer in perfect harmony, since neither application could see the other at all.
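
Just to make the isolation idea concrete, here's a toy sketch of the concept in Python--emphatically not how Softricity actually built their shim, which hooked Windows itself underneath unmodified applications and covered things like the registry as well as files. The IsolationShim class, the app names, and the file paths below are all invented for illustration; the point is simply that redirecting each app's file access into its own private "overlay" keeps two conflicting apps from ever seeing each other:

```python
# A toy sketch, NOT Softricity's actual implementation: the isolation idea
# reduced to user-space path redirection. A hypothetical "shim" maps every
# file path an app touches into that app's private overlay directory, so two
# apps that both want to own the same file never actually collide.
import os

class IsolationShim:
    def __init__(self, app_name, overlay_root="overlays"):
        # Each isolated app gets its own private view of the file system.
        self.overlay = os.path.join(overlay_root, app_name)
        os.makedirs(self.overlay, exist_ok=True)

    def _redirect(self, path):
        # Map a "system" path into this app's overlay (flatten drive/root markers).
        return os.path.join(self.overlay, path.replace(":", "").lstrip("/\\"))

    def open(self, path, mode="r"):
        private = self._redirect(path)
        if "w" in mode or "a" in mode:
            # Writes always land in the overlay, never in the shared OS image.
            os.makedirs(os.path.dirname(private), exist_ok=True)
            return open(private, mode)
        # Reads prefer the app's private copy, falling back to the real file.
        return open(private if os.path.exists(private) else path, mode)

# Two conflicting apps each believe they own "etc/shared.ini":
app_v1 = IsolationShim("helpdesk_v1")
app_v2 = IsolationShim("helpdesk_v2")
with app_v1.open("etc/shared.ini", "w") as f:
    f.write("version=1\n")
with app_v2.open("etc/shared.ini", "w") as f:
    f.write("version=2\n")
```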

Softricity perfected this technology over several years in the early 2000s, also building a packaging technology that could "package" apps into single-file bundles that could be dropped onto a Windows computer and executed immediately, and a "streaming" technology, which allowed these packages to be transmitted to client computers in real time at runtime.

The Citrix world loved Softricity. Finally it was possible to collapse all of those "silos" that segregated a Citrix farm into one huge silo for all apps, since using Softricity meant that you didn't have to worry about the application conflicts that forced you to use silos in the first place.

The "two technology" solution continued, though, with companies forced to deploy and manage applications in two ways: via Citrix (and Softricity!) when server-based computing was appropriate, and via the "old way" of doing local installs when the SBC method didn't work.

Bombshell 2003: Softricity goes offline

The Citrix / Softricity balance worked well. The two companies were friends, and Citrix gave Softricity several "ISV of the year"-type awards. But in late 2003, Softricity dropped a bombshell. Previous versions of the product required that the Windows client computer running the Softricity-packaged apps maintain a network connection with the Softricity server. But in 2003, Softricity announced "offline mode," whereby applications that were packaged with Softricity could run on desktop Windows environments while they were offline.

Think about how huge this was. Probably the #1 drawback to Citrix SBC was that it didn't work offline. So you had to package your apps once for deployment to your Citrix servers for users who would connect via SBC, and you had to package your apps a second time for users who were going to get the applications installed traditionally on their laptops. But now Softricity was coming out and saying, "No, you don't have to do that. Just package your applications one time, in Softricity. Then you can deploy that same package to your Citrix servers for SBC users and to your laptop users for people who need offline access."

Interesting!

As you can imagine, once people realized that you could package an app once in SoftGrid for local and SBC deployment, they started to think, "Wait a minute. Why exactly are we even using Citrix? Let's just put all our apps in Softricity!" After all, Softricity gave you the application management benefits, and it solved some of SBC's traditional drawbacks: no offline use, poor performance with graphically-intense applications, and apps that wouldn't run in a multi-user environment like Terminal Server.

But that was only half the picture. SBC still had many advantages over locally-installed apps (even those packaged and delivered with Softricity). Softricity required a Windows client device. And since the application ran locally, the code had to go across the network from the server to the client. (Even though that code could be cached, and even though the app could start running before all the code had been downloaded, it still had to get onto the client at some point.) And of course had Softricity existed in 1997, it would not have solved our Ohio-Houston two-tier application performance problem. In that case we needed SBC because we needed the application to execute in our datacenter near our database.

While Citrix and Softricity traded marketing jabs, more pragmatic architects realized that a "true" application delivery solution would involve a combination of both technologies. SBC was better for some use cases, while Softricity's application streaming was better for other use cases.

The "ultimate" solution would be a single product that combined both technologies. Softricity actually came pretty close to delivering on that vision. They had a product called "ZeroTouch" that fully integrated with Citrix SBC solutions and Citrix's web-based application publishing. ZeroTouch basically created a website where users could authenticate and then be presented with links to the various applications that they're authorized to use. These links could be for applications that were provided via SBC (i.e. Citrix), or for apps that were to be streamed to client devices can run locally in an isolation environment (i.e. Softricity). But the coolest part was the fact that these application links could also provide access to both methods of deployment, so the user, administrator, or system could figure out the best way to deliver that app for each user's specific connection scenario. ZeroTouch was the first true "multi-modal" application delivery platform, providing the right app via the right technology for the right use case. It was a cool product.

Microsoft buys Softricity

ZeroTouch never matured to its fullest potential, though, because in May 2006 Microsoft bought Softricity. They immediately made two drastic changes: First, they cut the price from about $150 to $30 per user. Second, they cancelled the ZeroTouch product. (Conspiracy theories abound to this day about why Microsoft cancelled ZeroTouch. No one knows for sure, but most people think it's because Microsoft wanted to focus 100% of Softricity's resources on creating a Vista-compatible and x64 version of the Softricity client agent. In 2006 Vista was just around the corner, and Softricity could solve a lot of the compatibility issues beta testers were running into trying to get Windows XP apps to run on Vista.)

Microsoft's acquisition of Softricity was significant for another reason, and that's because it drastically changed Citrix's plans in the area of "application delivery." Though we didn't know it at the time, Citrix actually tried to buy Softricity in 2003. Certainly Citrix realized the competitive threat that application streaming technology posed to their traditional SBC delivery method, but they also realized how great it would be if they could combine application streaming and SBC delivery into a single suite. To that end, Citrix announced "Project Tarpon" in 2005, a full-on competitor to SoftGrid. As Tarpon's release date in 2006 neared, Citrix announced that it would be released as "Citrix Streaming Server," a complement to Citrix's existing "Presentation Server" SBC product.

But May 2006 caught Citrix off guard. When Microsoft bought Softricity and cut the price by 80%, you can bet there were some nervous folks in Ft. Lauderdale (the hurricane-prone home to Citrix's corporate offices). How would Citrix respond to this move? Would they scrap their own Streaming Server product and just focus on adding value to Microsoft's Softricity, similar to what they'd been doing for years with Presentation Server adding value to Microsoft Terminal Services?

Again it's fortunate that I didn't work at Citrix, because instead of canceling Streaming Server, Citrix decided to add it as a "feature" of Presentation Server. In other words, one single product from Citrix--Presentation Server--could deliver applications via software streaming that would run locally on workstations, as well as via SBC that would be accessed remotely via Citrix's ICA remote presentation protocol. Furthermore, Citrix would enhance their Web Interface so that it could provide a single-portal, multi-modal interface for application delivery. And because these two delivery technologies were part of the same product, a single infrastructure, a single database, and a single administrative team could deliver apps with either technology. In many ways Citrix achieved what Softricity couldn't--a fully integrated application delivery suite.

There was only one problem. By the time Citrix released their application streaming capabilities, Microsoft's Softricity technology was in Version 4, while Citrix's was a brand-new v1. If you were a customer in early 2007 who bought into this whole concept, what did you do? Go with the technologically superior yet non-integrated Softricity route, or the totally-integrated yet technologically inferior route? Or did you wait until the dust settled to see what everyone else's collective experiences were?

The "V" word

While all this was happening in our "application" world, the Windows virtualization world was starting to mature. The early 2000s saw the release of VMware Workstation, the first hypervisor and hardware virtualization environment to run on Windows that people actually used. After a few years of desktop success, VMware launched a server version of their product. Glossing over what I'm sure is a very interesting story and a book unto itself, people liked VMware. After some initial hesitation, people started putting their production servers into VMware environments!

This was happening a world away from the "Citrix" world. The "VMware world" was made up of the hardware and operational folks. Sure there was some overlap (most likely due to the partial insanity shared by both early Citrix adopters and early VMware adopters). But in general, those of us in the Citrix world "did our thing" while blissfully ignorant of what was happening in the VMware world. (I guess I can't say that's 100% true. We were aware of the VMware world because some of those crazy folks wanted us to run our Citrix servers on VMware! And some of us even did it!! And some of them even did it successfully!!!)

Once the VMware folks virtualized as much as they could in the datacenter, they started looking for other targets of opportunity. "Hey," they reasoned, "look at all these desktops out there. It must be a real pain to manage all of those. What if we built huge VMware servers in our datacenters and ran Windows XP VMs? Then we could provide 'desktops as a service!' Think about it! Users could connect from any client! And we can easily upgrade all of our desktops since they'll all be in the datacenter!"

When the Citrix folks at these companies caught wind of this, their response was not quite as enthusiastic. "Um..." they said slowly, "you know, we already do this today. We've been doing it for years. It's called 'Citrix.'"

Of course the big difference between this "Citrix thing" and this "VMware-based Windows XP in the datacenter thing" was that the Citrix folks could get 75 user sessions on a typical $3,000 server, while the VMware folks could only get around 15.

One of the funniest things a Citrix person could do in 2005 was to visit the user forums on vmware.com and read about the VMware folks trying to do VDI. "How do we support multimedia apps?" "How do we print?" "How do we know what server we're on?" They asked and discussed all sorts of questions that those of us in the "Citrix world" had solved years ago.

While my personal perspective is that of a Citrix guy, the VDI approach certainly had some advantages over a Terminal Server-based solution like what Citrix offered. First and foremost was that because VDI was based on Windows XP instead of Terminal Server, it just "felt" more comfortable and familiar to rank-and-file desktop folks. You didn't have to deal with applications that wouldn't run on a multi-user Terminal Server. You could give users admin rights on their desktops.

Then again, by going to a VDI model, what did you really save? If you brought 1000 desktops into your VDI environment, you still had to manage 1000 instances of Windows. You still had to deploy applications to them and to patch them.

Citrix embraces VDI with "Project Trinity"

Around the time that VMware folks were starting to grapple with the realities of VDI (and around the time they were figuring out just how to do this stuff), Citrix announced something called "Project Trinity." Trinity was Citrix's first attempt to address the VDI space. The primary difference, though, between Citrix's approach and other VDI vendors' approaches was that Citrix was combining three different desktop delivery technologies: shared Terminal Server-based desktops (i.e. Citrix Presentation Server), VMware-based VDI environments, and individual PC blade-based environments. Fundamentally, Trinity was about providing desktops via server-based computing. But what was the best back-end to support that? Why should a customer have to decide between VDI (a single-user Windows XP-based solution) and Terminal Server? Each had its own advantages and disadvantages. So while VMware was out there beating the drum that desktop delivery via VDI was better than TS, Citrix was pushing both.

Citrix ultimately announced and released a product called Citrix Desktop Server. It was a primitive v1 product, but it did combine all three desktop delivery methods--VM-based, blade-based, and TS-based desktops--into a single product with a single management console.

What to do with all these disk images?

VDI has another inherent problem that doesn't exist in Terminal Server environments. When you have a single Terminal Server running 75 user sessions, all 75 users share the same disk image. (In other words, you only install and manage a single copy of Windows that's shared by all 75 users.) But what about a VDI environment for 75 users? 75 users running 75 virtual machines need 75 disk images. And while it's possible to use various disk image cloning and snapshotting techniques, once your disk images have been provisioned, you have to manage them just like any other installation of Windows. This means dealing with viruses, application conflicts, and patch Tuesday, just like you've been dealing with for years.

A more ideal solution would be to manage all of your VDI instances via a single disk image. In other words, you need a way to "share" a single image between dozens, hundreds, or thousands of VMs. How's this possible?

A small and relatively unknown company called "Ardence" solved this problem in an interesting way. Ardence was a software company with a product that essentially let computers mount their main boot drives from a disk image file across a network instead of having a local drive installed in them. What was really unique about Ardence was that several (even hundreds of) computers could mount the exact same disk image simultaneously. Ordinarily, if you took the hard drive of a Windows machine and shared it between a hundred computers, you'd have all sorts of conflicts: computer names, domain security identifiers, drivers, etc. Ardence solved this problem by inserting a driver between Windows and the disk image, and then pulling the unique identifiers for a particular machine from a database instead of from the disk. (A much more detailed explanation of Ardence is included in the provisioning and deployment chapter of this book.)

So with Ardence, you can let hundreds of VDI workstations share the same disk image. This is great in terms of management, because on patch Tuesday you only have to patch a single master image instead of trying to patch one image for each VM. Of course there's a downside to this, namely, that because all clients share the same image, you cannot personalize the disk for any one user. (I mean sure, you can make a client writable, but then you're not using the single master image, and you're arguably negating one of the main reasons for using Ardence in the first place.)

Cold versus hot images

Imagine you have a Windows workstation that re-imaged itself every time it booted. We would say that workstation is using a "cold" disk because it starts from scratch each time the computer restarts. In a sense, this is what Ardence is doing when multiple workstations (physical or virtual) share the same image. Since the master image is not writable, the workstations "remount" the read-only master image each time they're powered on. The disk image is "cold."

A "hot" image would be like a normal hard drive locally installed in a workstation. If you change the some files on the disk and restart the computer, the changes to the disk are maintained. Hot images are more natural for people to understand, but they're harder to support since each workstation has its own image.

Getting back to the cold images, a lot of people wonder how Ardence could possibly work in a VDI environment since it would mean that the machines would have to use cold images. Is this acceptable to users?

The reality is that people have been using cold images for years in the form of Terminal Server and Citrix. Remember that one server supporting 75 users? In a sense, that server is serving up "cold" images because each and every user connects and runs a session from the exact same Windows image on the server. So how do people make this work? They just use the server image as a baseline for the user's environment. Then they apply a series of "live" customizations at runtime. This can be things like loading roaming profiles, establishing Windows folder redirection, loading streamed applications, and dropping Program Neighborhood Agent shortcuts and links on the desktop. So even though 75 Terminal Server users end up sharing the same cold image, you can end up with 75 very different and completely personalized user environments. (And your users can even save settings and retain their unique environment from use to use.)

So TS admins have been "warming" cold images for years, and the exact same techniques can be applied to VDI environments where hundreds of workstations share the same cold image.
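
If you're curious what that runtime "warming" looks like in practice, here's a minimal, purely illustrative sketch of a logon-time script (Python with the standard winreg module on a Windows client). Every server name, share, and application in it is made up for the example; in real environments this work is normally handled by roaming profiles, Group Policy, logon scripts, and agents like Program Neighborhood Agent rather than hand-rolled code:

```python
# A hypothetical logon-time "warming" script for a shared (cold) image.
# Assumes Windows; all UNC paths and app names below are invented examples.
import os
import winreg

SHELL_FOLDERS_KEY = r"Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders"

def redirect_folder(value_name, network_path):
    """Point a per-user shell folder (e.g. 'Personal' = My Documents) at a network share."""
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, SHELL_FOLDERS_KEY, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, value_name, 0, winreg.REG_EXPAND_SZ, network_path)

def drop_shortcut(label, command):
    """Drop a simple launcher on the user's desktop (a stand-in for PN Agent shortcuts)."""
    desktop = os.path.join(os.environ["USERPROFILE"], "Desktop")
    with open(os.path.join(desktop, label + ".bat"), "w") as f:
        f.write('@echo off\r\nstart "" %s\r\n' % command)

if __name__ == "__main__":
    # 1. Redirect user data to the network so nothing personal lives in the cold image.
    redirect_folder("Personal", r"%HOMESHARE%\Documents")
    # 2. Drop per-user application shortcuts at logon.
    drop_shortcut("Helpdesk App", r"\\appserver\apps\helpdesk\helpdesk.exe")
```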

Citrix buys Ardence

Ardence's technology really was amazing, and once people realized there were relatively straightforward ways to address user personalization in the shared image environment, it seemed that everyone wanted a piece of Ardence. In December 2006, Citrix beat out several other companies and bought Ardence. This was a huge win for them because none of their competitors (not even Microsoft) had technology even close to what Ardence could offer.

Ardence continued to work on their products as a more-or-less independent subsidiary of Citrix. The main change was that Ardence's primary product was renamed to "Citrix Provisioning Server."

Microsoft gets serious about virtualization

Meanwhile the hardware virtualization world continued to heat up. Microsoft's first attempt, a product called "Microsoft Virtual Server" (based on technology they got when they acquired Connectix), was more-or-less a failure. Virtual Server was a nice toy, but it ran the hypervisor as an application on top of Windows Server. VMware's ESX product, on the other hand, ran the hypervisor on bare metal, which ultimately led to much higher performance and fewer reboots. (Imagine patch Tuesday with Virtual Server. You had to reboot your entire host!) Even though Microsoft didn't really want to embrace virtualization (Don't move my cheese!), they were forced to in a sense because they didn't want VMware to insert themselves in-between Windows and the raw hardware. And the only way to compete against VMware would be to make their own hypervisor that could run on the raw hardware.

Rather than build this from scratch, Microsoft sent a bunch of engineers to Cambridge, England, to meet with a team of folks who were working on an open source hypervisor called "Xen." The two groups learned a lot from each other. Microsoft announced that they would build their own competitor to ESX Server, codenamed "Viridian," and the Xen team released the "Xen" open source hypervisor. (Early versions of Xen used a "paravirtualization" technique that required customization of the guest OS, essentially meaning Windows could never be used as a guest. But later versions of Xen could work with Windows.)

When Microsoft announced Viridian, they also announced several management features (live migrations, provisioning, etc.) that would let them compete head-to-head with VMware. People began to openly wonder whether VMware would be able to survive against Microsoft once Microsoft entered their bailiwick.

Of course, in typical Microsoft fashion, as the Viridian release date drew closer and closer, two things happened: features were removed from the product, and the release date was pushed further and further back. (It ultimately landed with Viridian having basically no advanced features and a target release date of six months after Windows Server 2008 was released.)

Meanwhile, Microsoft and the open source Xen group continued to stay close. Through a series of announcements in 2006 and 2007, they announced that they would create common APIs and interchangeable virtual disk and virtual machine formats.

Xen's hypervisor was still mostly used in open source environments, although several people from the original Xen open source team in Cambridge created a for-profit company called "XenSource" that would create commercial management and add-on tools for open source Xen implementations.

During this time, Citrix was trying to expand beyond pure Presentation Server in the application and desktop delivery space. They had Presentation Server. They had Provisioning Server (Ardence). They had Desktop Server. But they didn't have a hypervisor or the related management tools. For that, customers had to go to VMware. Citrix had a fantastic disk image provisioning product, but no hypervisor of their own to pair it with.

So Citrix started sniffing around, looking for a hypervisor to buy. They considered a few options, but in May 2007 ultimately chose to buy XenSource. What's interesting here is that the Xen hypervisor was (and will remain) open source. What Citrix actually planned to buy was the commercial for-profit company that made management add-ons for the free Xen hypervisor. This would put Citrix in a position where they didn't really need VMware. In fact, it would make the two companies head-to-head competitors.

VMware was planning to go public in August 2007, so even though Citrix finalized the deal to buy XenSource (for $500m!) a few months before that, Citrix decided to announce it the day before VMware's IPO. This was a brilliant (yet gutsy) move on Citrix's part, because every single news article about VMware's IPO also mentioned Citrix. It was on.

Once Citrix's acquisition of XenSource closed (in October 2007), the company moved quickly to rebrand and rebuild itself around virtualization and the XenSource products. Citrix renamed XenSource's server product to "XenServer." In January 2008 they announced "XenServer Platinum," which bundled Citrix Provisioning Server with XenServer. They renamed Citrix Desktop Server to Citrix "XenDesktop," also bundling Provisioning Server with that product. And finally, keeping with the "Xen" branding, Citrix renamed "Presentation Server" to "XenApp."

So Citrix WinFrame, MetaFrame, MetaFrame Presentation Server, Presentation Server, and XenApp. That is the naming lineage of the product that this book is about.

Comments

Guest wrote Tempted..
on Mon, Mar 3 2008 1:14 PM
Brian, after reading this excerpt from your new book, I'm tempted to buy (pre-order?) it now :)
Clayton wrote Our collective past, present and future
on Mon, Mar 3 2008 4:01 PM

The first couple paragraphs made me smile from ear to ear. I worked with pcAnywhere and Cubix as well. Even had to configure a small 4-modem Galacticomm BBS so external users could drop off and retrieve files. Must have started a bit earlier with Citrix though. I recall using the WinView OS/2-based product and struggling to configure printers via the command line. All of a sudden you become the "Citrix guy". This is a great overview of where the industry has been and where it is most likely headed. When the book is ready, save one for me too.

Guest wrote good review
on Mon, Mar 3 2008 6:17 PM

Brian:

You forgot the "XP" name.

Citrix will not admit it but they were trying to anticipate the name MS was giving to Windows 2003 server OS.

MS previously used the same name for the WS OS as the server OS in "Windows 2000."

"Citrix Metaframe XP Presentation Server with Feature Release 2" was actually one of the product names.

This is a case of Citrix Marketing going wild.

It took me a while but I do like the simplicity of the "XenApp" name.

Guest wrote One slight mistake
on Tue, Mar 4 2008 8:48 AM
I may be splitting hairs here but I think Citrix announced the XenSource acquisition the day after the VMWare IPO not the day before. All in all, a really entertaining summing up of how Citrix has progressed. No mention of NetScaler, WANScaler, EdgeSight, Password Manager etc. although I guess they are not integral parts of the history of SBC. Just out of interest, will they be dealt with in separate future chapters? Also, will the rest of the book be written in this style or will it become more technical? I have limited technical knowledge so really enjoyed reading the above.
Guest wrote ?
on Wed, Mar 5 2008 4:50 PM
was your blue when you wrote this?
Guest wrote Re: ?
on Wed, Mar 5 2008 5:17 PM
What does that mean?!?  Someone needs to get off the sauce!
Guest wrote First exposure...
on Wed, Mar 12 2008 12:14 PM

This did make me laugh!  My first exposure to Citrix back in 97 was for a local authority that was 100% Unix, with green screens and terminal emulation packages, none of which were Y2K compliant.  Give the project to the new guy why don't ya!!!  12 months down the line we're rolling out Tektronix WinDD (Winframe 1.7 cloned) and over 300 pizza box thin clients.  Office 97, NetTerm and Netscape Navigator formed the core of the apps!  How good were mandatory profiles and Outlook prf files :)  10yrs down the line and I'm longing for such simple pleasures once more (sigh!!!)

Guest wrote Re: First exposure...
on Wed, Mar 12 2008 12:19 PM
OMG I nearly forgot... a bank of 33k modems hung on the back of one of the huge Compaq 6500s (with 4x Pentium Pro 200 cpus, 1gb RAM and massive 9gb SCSI drives) for remote access to the lucky few!  Call-back feature kept me in free internet access for months!!!
Guest wrote Don't forget Winview
on Thu, Mar 13 2008 8:27 AM

One addition: it was called WinView prior to WinFrame http://en.wikipedia.org/wiki/Citrix_Systems : )

Good stuff Brian! We used Winframe 1.6 with Wyse Winterms to publish Netscape (first library in the nation to have graphical internet) to our patrons.  We had 8 servers servicing about 600 terminals. When we saw the potential of Citrix we also published CD-ROM programs like MorningStar, Westlaw, PhoneDisc, American Business Disc and Haines Real Estate from our admin building. Previously we had 4 workstation LANs with a server for the CD-ROM products at 6 of our branches, and we had to run out to the six locations EACH WEEK and update the disks for each of the programs. A real PITA. I got the idea to try and hook a CD tower to one of the servers and publish the CD apps. It worked! And we were able to remove the 6 servers from the branches and handle all the disk changing at our admin building. It saved time, gas and hardware. I had companies/other libraries calling me asking how we did it. I started thethin.net so there could be a reference point to help these people calling me get up to speed. From there the Thin discussion list was started ( http://www.freelists.org/list/thin ) and the rest is history. We have been using Citrix at the Library now for 12 years!! The industry has changed so much over the years. Lots of millionaires were made from the early advertisers on thethin.net; companies like RTO (formerly softblox), Quest Provision (formerly EOL), Softgrid, EG, all got their momentum from it.  (and here I am still poor LOL)

I think the whole virtualization thing is great for servers but I have to agree with you that VDI still makes me scratch my head. Ron Oglesby did a great talk at one of the first BriForums on scaling desktops on VMware and the conclusion is that it really is not cost effective because of the amount of hardware needed. The Ardence and SoftGrid factors are compelling but it all still comes down to management. As all of these products get more integrated we should see mass improvement.

AND when everyone in the world finally has a fiber connection and we are all running at gigabit speeds, who knows where the OS is going to end up.  There is just too much cool stuff coming out. Did you see, for example, where they were going to add email to the iPod Touch? Have you had a look at the Amazon Kindle ebook reader? Built-in FREE EVDO Internet. Download the NY Times, magazines, books, surf WAP web sites. God I wish they would add that to the iTouch!

The devices that people are going to want to run their apps on are getting smaller, and the face of the industry is going to change once again I think.  Citrix should seriously start looking at a web client for the iTouch, for example. Apple should get in bed with them on it. They would sell more devices, that's for sure!

At any rate it is an exciting time to be in the SBC community. With it being so closely meshed with virtualization (note, for example, that the MS site links Terminal Services stuff off their new virtualization page) we are finally getting recognized, and TS is getting a little of the respect that it deserves.

Ok I am done rambling,

Jim Kenzig
Still a Citrix CTP
Microsoft MVP Windows Server-Terminal Services

 

 

  
