Citrix MetaFrame XP Network Design

Note: This paper is excerpted from the book, Citrix MetaFrame XP: Advanced Technical Design Guide, Including Feature Release 2. Diagrams and figures are only available in the PDF download.

In this chapter, we’re going to look at the considerations you need to take into account when designing your MetaFrame XP network architecture. By “network architecture,” we mean the MetaFrame XP environment as it relates to the network, not the specifics of individual servers.

It is crucial that your MetaFrame XP architecture is able to support your users over your existing network. Regardless of whether you’re planning on scaling your MetaFrame XP environment to support worldwide users, or you’re building one server that may be the foundation for the future, you should address several things, including:

  • MetaFrame XP server placement. (The location of your servers on the network.)

  • MetaFrame XP server farm design.

  • Server farm zones.

  • IMA data store usage.

We’ll close the chapter with a case study that details a real world design for a toy company. We’ll look at several possible designs that this company considered and the advantages and disadvantages of each option.

Now, before we start looking at the details of the MetaFrame XP network architecture, it’s important to fully understand Citrix’s new Independent Management Architecture (IMA). Let’s take a peek now.

Independent Management Architecture (IMA)

MetaFrame XP is the first Citrix product that uses Citrix’s Independent Management Architecture (IMA). The Citrix product literature describes IMA as if it’s a magical solution that makes working with MetaFrame XP effortless. In the real world, MetaFrame XP’s IMA consists of two components that we actually care about.

Independent Management Architecture is:

  • A data store, which is a database for storing MetaFrame XP server configuration information, such as published applications, total licenses, load balancing configuration, MetaFrame XP security rights, and printer configuration.

  • A protocol for transferring the ever-changing background information between MetaFrame XP servers, including server load, current users and connections, and licenses in use.

In MetaFrame XP, IMA does not replace the ICA protocol. The ICA protocol is still used for client-to-server user sessions. The IMA protocol is used for server-to-server communication in performing functions such as licensing and server load updates, all of which occur “behind the scenes.”

Figure 3.1 MetaFrame XP network communication

If you’re familiar with previous versions of MetaFrame, MetaFrame XP’s IMA does replace the ICA Browser Service. Not to be confused with the ICA protocol, the ICA Browser Service (in previous versions of MetaFrame) was used to replicate MetaFrame server configuration information between servers. This was needed because that information was stored in the local registries of each server. (They didn’t use a central database, like IMA does). That ICA Browser Service was notoriously bug-ridden, extremely chatty, and didn’t scale very well. Today, of course, all of that information is stored in the IMA data store. (For more information about integrating MetaFrame XP with the previous version of MetaFrame, see Chapter 13.)

Today, every MetaFrame XP server runs the “IMA Service.” This service is the actual component that communicates with the IMA data store and other MetaFrame XP servers. Additionally, this IMA service communicates with the Citrix Management Console to allow administrators to manage and configure servers.

Placement of MetaFrame XP Servers

The first major thing you need to consider when designing your MetaFrame XP network architecture is the physical placement of MetaFrame XP servers on the network.

The key here is to determine where MetaFrame XP servers should be located in relation to the data and the users. In simple cases, this determination is not very difficult. Consider the environment in Figure 3.2 consisting of two office locations. Let’s assume that users from both offices need to access a database-driven application housed in the main office.

Figure 3.2 Users in two offices need access to the same database application

This company’s IT department has decided to use MetaFrame XP to ease application deployment and to get the best possible performance for remote users. The company is faced with two choices when it comes to the location of the MetaFrame XP server for the remote users: they can put the MetaFrame XP server at the remote office with the users, or at the main office with the database.

While both choices would allow the company to manage the users’ applications, putting the server near the database will yield the best performance. (See Figure 3.3.) This is because the network traffic between the database and the client application running on the MetaFrame XP server is much heavier than the ICA user session traffic between the MetaFrame XP server and the end user. By placing the MetaFrame XP server at the main office, the database client software that is installed on the MetaFrame XP server is located near the database itself. Application performance is excellent due to this close proximity, and only MetaFrame XP ICA session traffic has to cross the expensive, slow WAN link.

Figure 3.3 A MetaFrame XP server at the main office

Now consider the other possible server placement option for this company. If the MetaFrame XP server were located at the remote office (as in Figure 3.4 on the next page), the heavy database traffic would still have to cross the WAN while the light, efficient ICA session traffic would be confined to the remote office's LAN, where bandwidth is plentiful. A server located at the remote office would not help application performance from an end user's point of view, because the level of database traffic on the WAN is no different than it would be without MetaFrame XP.

Figure 3.4 MetaFrame server placement at the remote office

As this simple example shows, it’s desirable to place the MetaFrame XP server close to the data source instead of close to the users. MetaFrame XP’s ICA protocol is designed to work over great distances and slow WAN links. This allows heavy application data traffic, flowing between the MetaFrame XP server and the data server, to remain on a local LAN.
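The reasoning above can be sketched as a back-of-the-envelope WAN-traffic comparison. The per-user traffic figures below are illustrative assumptions, not measurements from this chapter:

```python
# Rough per-direction WAN load generated by the remote office's users
# under each placement option (Figures 3.3 and 3.4). The traffic
# figures are assumed for illustration only.

DB_TRAFFIC_KBPS = 200   # assumed database client <-> database traffic per user
ICA_TRAFFIC_KBPS = 20   # assumed ICA session traffic per user

def wan_traffic_kbps(users: int, server_at_main_office: bool) -> int:
    """WAN load for the remote users under a given server placement."""
    if server_at_main_office:
        # Only the thin ICA sessions cross the WAN (Figure 3.3).
        return users * ICA_TRAFFIC_KBPS
    # The heavy database traffic crosses the WAN instead (Figure 3.4).
    return users * DB_TRAFFIC_KBPS

print(wan_traffic_kbps(25, server_at_main_office=True))   # 500
print(wan_traffic_kbps(25, server_at_main_office=False))  # 5000
```

With these assumed numbers, placing the server next to the database cuts the remote office's WAN load by a factor of ten; the actual ratio depends entirely on how chatty the application's client-server protocol is.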

Why should you care about server placement?

As shown in the previous example, the placement of your MetaFrame XP servers will directly impact several areas, including:

Users’ session performance.

Network bandwidth usage.

Server management.

Users’ Session Performance

The performance of the users’ sessions depends not only on the network speed between the user and the MetaFrame server, but also between the MetaFrame server and the data the user needs to access. It does no good to put a MetaFrame server on the same LAN link as a user if that server must access files that are located across a 56K connection.

However, this must be balanced against the network latency between the user and the MetaFrame XP server. Users won't want to use MetaFrame applications if there's a two-second delay between the time they hit a key and the time the character appears on their screen.

Network Bandwidth Usage

Network bandwidth usage is directly affected by the location of the MetaFrame XP servers. Average MetaFrame XP ICA user sessions require only about 20KB per second. Many n-tier business applications (such as Baan, SAP, and PeopleSoft) require much more than that. If your MetaFrame XP server is on the wrong side of the network, you won't save any bandwidth by using MetaFrame.
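Using the chapter's rough figure of about 20KB per second per ICA session, a quick sizing sketch shows how many concurrent sessions a given WAN link could carry. The link capacities and headroom factor below are assumptions for illustration:

```python
# Estimate how many concurrent ICA sessions a WAN link can carry,
# using the chapter's rough ~20KB/s-per-session figure. Link capacity
# and the headroom factor are illustrative assumptions.

ICA_KB_PER_SEC = 20  # rough per-session figure from the text

def max_sessions(link_kb_per_sec: float, headroom: float = 0.7) -> int:
    """Sessions that fit if only `headroom` of the link is committed."""
    return int(link_kb_per_sec * headroom / ICA_KB_PER_SEC)

print(max_sessions(190))   # roughly T1-sized link (~190 KB/s) -> 6
print(max_sessions(1000))  # a larger link -> 35
```

The headroom factor simply reserves part of the link for other traffic; in practice a congested link degrades every session on it, so sizing conservatively is the safer assumption.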

Server Management

Ultimately someone is going to need to maintain and manage the MetaFrame XP servers. It’s usually much easier for administrators to maintain them if the application servers and the MetaFrame XP servers are both at the same physical location.

What are the server placement options?

Even after you've weighed the complexities of deciding where to put your MetaFrame XP servers, there are really only two options:

  • Distribute the servers throughout your environment, placing some near each data source.

  • Put all MetaFrame XP servers in the same place, in one big datacenter.

As with all decisions, each point has distinct advantages and disadvantages that must be considered when designing the final solution.

Option 1. MetaFrame XP Servers Placed in many Locations

When users need to access data that is in multiple geographic areas, multiple MetaFrame servers can be used, with some servers in each location, physically close to each data source.

By placing MetaFrame XP servers in multiple locations throughout your environment (see Figure 3.5), a user can concurrently connect to multiple MetaFrame XP servers. This allows each server to have quick, local access to the data. An added benefit of this is that there is not one single point of failure. Losing access to one data center only affects some applications.

Figure 3.5 Multiple MetaFrame XP servers provide fast access to data

MetaFrame XP can even be configured to automatically route users to secondary servers in the event that a user’s primary servers are inaccessible. (See Chapter 4 for details on how to do this.)

The downside to having MetaFrame servers in multiple locations is that your overall environment becomes more complex. Servers must be managed in several physical locations. User access must be designed so that they can seamlessly connect to multiple MetaFrame XP servers. On top of all this, it is inevitable that some data will only exist in one place, and that users will need to access it from every MetaFrame XP server, regardless of location. (Windows roaming profiles are a good example of this.) Lastly, a multi-server MetaFrame XP environment requires that each MetaFrame XP server communicates with other MetaFrame XP servers to transfer background information. When all MetaFrame XP servers are located on the same LAN, managing this communication is not an issue due to the high availability of bandwidth. However, when MetaFrame XP servers span multiple physical locations connected by WAN links, this communication must be managed. (Managing this communication is certainly possible; it just becomes another thing that must be planned for.)

As you can see in Figure 3.5, there are several advantages and disadvantages to placing MetaFrame XP servers in multiple locations throughout your environment.

Advantages of Placing Servers in Multiple Locations

  • Users’ MetaFrame XP sessions are always close to their data.

  • Efficient use of WAN bandwidth.

  • Local departments can own, control, and manage their own servers.

  • Increased redundancy.

Disadvantages of Placing Servers in Multiple Locations

  • More complex environment.

  • Users may need to connect to multiple MetaFrame XP servers in order to use all their applications.

  • Your servers might require additional local (onsite) administrators because they are not all in the same building.

Option 2. All MetaFrame XP Servers in one Central Location

Instead of sprinkling MetaFrame XP servers throughout your environment, you can put all of your servers in one datacenter (see Figure 3.6 on the next page). After all, providing remote access to Windows applications is what MetaFrame XP is really designed to do.

Figure 3.6 All MetaFrame XP servers in one datacenter

Having one central datacenter that contains all of your MetaFrame XP servers is easy to administer, but it causes other issues to arise.

For example, any users that need to access data outside of the datacenter where the MetaFrame XP servers are located must do so via a WAN link. While the performance of the ICA session between a user and a MetaFrame XP server won't be a problem, significant performance problems could exist within the application sessions themselves due to the potentially great WAN distance between the MetaFrame XP server and the user's data.

Different applications handle data latency in different ways, but your users will become frustrated if they have to wait a long time to open or save files. Additionally, WAN bandwidth might be wasted because users would be forced to connect to all MetaFrame XP applications via the WAN.

Advantages of Placing all MetaFrame XP Servers in one Location

• Simple environment to administer.

• Users can connect to one MetaFrame XP server to run all of their applications.

• MetaFrame XP servers are all in the same physical location.

Disadvantages of Placing all MetaFrame XP Servers in one Location

• Access to data may be slow if the data is located across a WAN.

• WAN bandwidth may be wasted because users would be forced to connect to a remote server for any MetaFrame XP application.

• No option for local MetaFrame XP servers (local control, local speed, etc.)

• Single point of failure.

As you can see, the location and placement of your MetaFrame XP servers will directly impact many aspects of your MetaFrame XP environment. While part of the design will be easy, other aspects will take some time and thorough planning.

Considerations when Choosing Server Locations

The previous example showed that the data location directly affects the placement of the MetaFrame XP server. However, in the real world there are many more factors than this simple example outlined. The following list covers the key considerations; each is described in more detail below.

• Where are the users?

• Where is the data?

• How much (and what type) of data is each user going to need?

• How many different applications are the users running?

• Where is the IT support for the applications?

• What does the WAN look like?

User Location

The location of the users is a major factor to consider when deciding where to put the MetaFrame XP servers. Are all of the users in one central location, or are there multiple pockets of users? Is there a datacenter at every location where the users are, or are the users at remote offices?

Data Sources

The data that users need to access from within their MetaFrame XP sessions is probably the most important consideration when deciding where to put your servers. When you look at the sources of data, it is important to consider all types of data that a user may need to access from a MetaFrame XP session. This includes back-end application data and databases, as well as files and file shares, home drives, and Microsoft Windows roaming profiles. (See Figure 3.7)

Are the users at the same physical location as all of their data sources? Is all application data at the same location on the network as users’ home drives and Windows roaming profiles, or will users need to pull data from multiple network locations for a single session?

Figure 3.7 Users often need to access multiple types of data from one session

When considering the data that users need to access, consider how each data source will be used throughout their sessions. Will they need to access the data only during session startup or shutdown, or will they need constant access throughout the entire session? For each data source, will users only need to read the data, or will they need to write as well?

Lastly, consider the impact of each data source on the users’ sessions. What happens if the path to each data source is congested? Will users be merely inconvenienced, or will they not be able to do their job?

To help understand the importance of these questions, refer to Figure 3.8 (facing page). This diagram details a situation that is becoming more and more common as organizations grow.

In this example, a user works for a company with a worldwide presence. Apparently this company followed the advice of consultants from the nineties, because all of their crucial business data has been consolidated into one single database in the US. Obviously, one of the main reasons that this company chose to use MetaFrame XP is so that their European users can have fast access to the database application. This company put a MetaFrame XP server in the US, right next to the database server, allowing the European user to access the database through a bandwidth-efficient ICA session. Sounds great! Very simple. Unfortunately, in the real world it is not always as simple.

In reality, the European user must access applications other than that one US database. Since the user is already running applications via ICA sessions to a MetaFrame XP server in the US, they might access other applications via that same MetaFrame XP server, right?

Figure 3.8 A user in Europe needs to access data throughout the world

Let’s think about this before we jump to an easy answer. Should a European user really be accessing all applications via servers in the US? Sure. If the user is already crossing the WAN to connect to the database, there is no real impact to adding more applications. But will the user always be utilizing the database? What if the user just wants to use other applications? Should the company pay for the transatlantic bandwidth so that the user can create a PowerPoint presentation? What about the user’s home drive? Most likely, the user will want to save files and work with others. Should he use PowerPoint running on a US server while saving files to a file server in Europe? What about PowerPoint’s auto-save feature? Will this user have the patience to wait while his file is auto-saved across the ocean WAN every ten minutes while he’s trying to work?

The point here is that users need to connect to multiple data sources, and they frequently need to access data that resides in many different geographic regions. While this European example is a geographic extreme, the same ideas apply anywhere. A slow WAN is a slow WAN. The previous example also applies to users in Washington DC accessing databases 30 miles away in Baltimore over a 56k frame relay.

This example illustrates a situation in which a user only needed access to a database and a home drive. Other users may need to access files and data from many different groups in many different locations. Also, don’t forget about Windows roaming user profiles. If one single roaming profile is to be used for all MetaFrame XP sessions on servers throughout the world, then that profile needs to be accessible to the user wherever they log on. (More on roaming profiles in Chapter 5.)

If a user only needed to access data from one geographic region, the design would be simple. You would just put a MetaFrame server next to the data and have the user connect via an ICA session. However, multiple geographic regions that all have important data for the user increase the complexity of the design.


Application Mix

The number and types of applications that you want to make available via MetaFrame XP also affect the decision as to where the servers should be located. The application mix needed by one user may dictate that the user must connect to multiple MetaFrame XP servers. Some users may only need to access applications on a single MetaFrame XP server while others may need to access applications across departments via many MetaFrame XP servers.

The mix of local applications and remote MetaFrame XP applications is also a factor. Will any applications be loaded locally on the users’ computers or will everything be done via MetaFrame XP? If everything is done via MetaFrame, and the MetaFrame servers are located across the WAN from the users and the WAN link goes down, all productivity stops. Is that an acceptable risk to the organization or should some servers and data be local—even though all data may not be local?

IT Support of Applications

How does your organization’s IT department support applications? If all application support is from one site then it makes sense for all MetaFrame XP servers to be located at that site. However, large organizations usually have many applications that are supported by different people from different locations, as shown in Figure 3.9 on the next page. In these cases you may have to place MetaFrame XP servers in multiple locations, with each server placed near the people that support its applications.

WAN Architecture

The wide area network can also affect where MetaFrame XP servers should be located. If WAN bandwidth is congested, it often makes sense to place MetaFrame XP servers across the WAN from the users (next to the data), because ICA sessions are generally far more bandwidth-efficient over WAN links than the native applications they replace.

Figure 3.9 Application support from multiple people in multiple locations

MetaFrame XP Server Farm Design

Remember from Chapter 1 that a server farm is a logical group of MetaFrame XP servers that are managed together as one entity, similar in concept to a Microsoft Windows domain. With MetaFrame XP, one single farm can scale to hundreds of servers to support a very large enterprise. Of course, there are also reasons that organizations might choose to have multiple, smaller server farms.

When deciding on the boundaries that separate server farms, one geographic location does not always correlate to one server farm. There are many situations that call for a server farm to span multiple locations, even if those physical locations are connected via slow WAN links. Conversely, there are also reasons that one physical location would need to have multiple MetaFrame XP server farms, or even multiple farms within one datacenter. The decision as to the number and locations of farms needs to balance technical and business requirements.

Even though we’ve used the analogy of Microsoft Windows domains to explain the concept of MetaFrame XP server farms, the server farm boundaries do not have to be aligned to Windows domain boundaries. One server farm can span multiple domains, or one server farm can be made up of MetaFrame XP servers that belong to different domains.

Why should you care about server farm design?

You should design your server farm boundaries as a separate step from your MetaFrame XP server locations: first decide where you're going to put your servers, and only then think about their farm memberships and how many farms you will have.

The decision to create one all-encompassing server farm or several smaller server farms impacts several areas in your environment, specifically:

• Ease of management.

• Network bandwidth usage.

• End users’ ability to logon.

Ease of Management

All of the MetaFrame XP servers in one farm can be managed together. They are managed via the same tool, and many security and configuration settings are configured on a farm-wide basis. MetaFrame XP servers that belong to different farms must be managed separately. Applications cannot be load-balanced across servers in different server farms (although users can simultaneously run sessions on MetaFrame XP servers that are members of different farms).

Network Bandwidth Usage

MetaFrame XP servers in a server farm must communicate with other servers in the same farm. (Remember the new IMA server-to-server communication protocol?) Therefore, having many servers in a farm increases network communication overhead.

End Users’ Logon Ability

One single server farm can span multiple domains. In these multi-domain farms, it is possible for one published application to be load-balanced across MetaFrame XP servers that are members of different domains. If this is the case, users may experience intermittent logon problems if they are routed to a load-balanced server from another domain that does not trust their domain.

What are the server farm design options?

When thinking about your MetaFrame XP server farm layout, there are really two choices. You can build large farms that include MetaFrame XP servers from multiple geographic areas, or you can build multiple small farms, each including a single group of MetaFrame XP servers.

Note that the "geographic area" referred to here relates to the geographic area of the MetaFrame XP servers only, which is why we design the server farm after we design the placement of the MetaFrame XP servers. If you have users all over the country, but you have decided that all the MetaFrame servers will be in one location, then that would be considered a single "geographic area" with regard to server farm design.

Option 1. One Large Server Farm

There are many advantages of having one large server farm. Remember, one large server farm does not mean that all of your servers must be in the same physical location. Some companies have MetaFrame XP servers in several datacenters throughout the world, with all of those servers belonging to the same server farm.

By creating one server farm, all administration can be done by one group of people. Farm-wide changes affect all the MetaFrame XP servers in the company because all company servers are in the same farm. Additionally, each user only needs a single ICA connection license, even if they connect simultaneously to multiple MetaFrame XP servers in different parts of the world.

Unfortunately, there are also drawbacks to the single farm model. Because all farm servers need to stay in communication with each other, farms that span slow WAN links must use part of those links for farm communication. (A farm can be divided into logical "zones" that help manage that communication. These zones are covered in the next section.) Also, server farms are designed to be administered as a group, and this administration is "all or nothing": you can't give some people administrative rights to some farm servers while preventing them from administering others. This becomes a real issue if you have farms that span great distances and you want local administrators to be restricted to managing local servers only.

Advantages of Creating One Large Server Farm

• Efficient license usage.

• Single point of administration.

Disadvantages of Creating One Large Server Farm

• Cannot segment farm server administration.

• Intra-farm network communication consumes bandwidth, potentially across WAN links.

• All farm-wide settings must apply to all servers.

Option 2. Multiple Smaller Server Farms

Many companies choose to segment their MetaFrame XP servers into multiple farms. Again, this segmentation does not have to follow geographic boundaries. Some companies have several server farms for MetaFrame XP servers that are in the same datacenter because different departments have different needs, users, and servers.

Splitting your MetaFrame XP environment into multiple farms allows local groups or departments to manage their own servers and to purchase and manage their own licenses. Larger companies with several locations can save WAN bandwidth by keeping MetaFrame XP servers on both sides of the WAN in separate farms.

Of course, if separate farms are created then one user connecting into multiple farms will need a MetaFrame XP connection license for each farm that is used. (See Chapter 14 for full license usage details.) Also, any enterprise-wide changes made by MetaFrame XP administrators will need to be manually configured for each farm.

Advantages of Creating Multiple, Small Farms

• Intra-farm network communication is confined to each smaller farm.

• Departmental licensing.

• Local administration.

• Different farms can have completely unrelated configurations and settings.

Disadvantages of Creating Multiple, Small Farms

• One user connecting to multiple farms requires multiple licenses.

• Enterprise-wide configuration changes need to be separately applied at each farm.

Considerations when Designing your Server Farm

There are several considerations that will help you make your decision as to whether you will have one large farm or several smaller farms. If you choose to have several smaller farms, these can also help you choose your farm boundaries and how you should segment your server farms.

• How will your MetaFrame XP servers be administered?

• How much network bandwidth is available?

• What are your licensing requirements?

• What is the Windows domain / Active Directory design?

• Where are the users located?


Server Administration

The desired administration of your MetaFrame XP environment will drive the server farm design. A MetaFrame XP server farm is designed to be managed as one group. Because of this, any farm administrative rights that you grant to users in your farm apply to all servers in the farm. It is not possible to grant users administrative rights on some servers while preventing them from administering others in the farm. If you need to segment the administration of MetaFrame XP servers, then you need to create multiple server farms.

Feature Release 2 for MetaFrame XP does introduce the concept of segmented administration. However, this administration is segmented by role, not by server. What this means is that if you have a large farm, you can give some users administrative rights over certain roles, such as printer management or application management, while preventing them from having the ability to change the network configuration of servers or add new servers to the farm. The problem with this is that these rights also apply to all servers in the farm. Users who are only granted the right to manage printers have that right on all farm servers. There is still no way to let some users administer certain servers while preventing them from administering other servers in the same farm.

Network Bandwidth

MetaFrame XP servers in server farms need to communicate with each other. For this communication, consistent network connectivity is needed. If you have any network connections that are extremely limited or unreliable, you may not want to span one farm across them, choosing instead to create two farms, one on each side.


Licensing Requirements

MetaFrame XP licensing is connection-based, which means that one user can simultaneously run sessions on multiple MetaFrame XP servers in the same farm and only use one license. However, if one user connects to servers in two different server farms, one license is required for each server farm. If you want users to use only one license, you must put all of the MetaFrame XP servers they use in the same server farm.
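The connection-license rule described above can be expressed as a tiny sketch; the farm and server names here are hypothetical:

```python
# Count the connection licenses one user consumes: one per farm the
# user touches, no matter how many servers within each farm. The farm
# and server names are made up for illustration.

def licenses_needed(connections: list[tuple[str, str]]) -> int:
    """connections: (farm, server) pairs for one user's active sessions."""
    return len({farm for farm, _server in connections})

# Three servers, all in one farm -> 1 license.
print(licenses_needed([("FarmA", "srv1"), ("FarmA", "srv2"), ("FarmA", "srv3")]))  # 1
# Sessions spanning two farms -> 2 licenses.
print(licenses_needed([("FarmA", "srv1"), ("FarmB", "srv9")]))  # 2
```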

Windows Domain / Active Directory Design

The design of your underlying Windows NT domain or Active Directory can also impact your MetaFrame XP server farm design. There is no problem with having multiple MetaFrame XP server farms in one Windows domain. However, the opposite is not necessarily true. Ideally, a farm should not span multiple NT domains or Active Directory forests. While there is no technical reason that one server farm cannot span multiple domains, management becomes much more complex. (Refer to Chapter 17 for more details.)

User Location

When designing server farm boundaries, you need to think about the locations of your MetaFrame XP servers in addition to the locations of the users that will be accessing the servers. For example, if you have decided that you need to have multiple groups of MetaFrame XP servers in different geographic areas, but users from each area only connect to their local MetaFrame XP servers, then you can easily make the decision to create multiple server farms.

Server Farm Zones

MetaFrame XP server farms can be partitioned into multiple logical segments called “zones.” Every server farm has at least one zone (which is created by default when the server farm is established). As an administrator, you can add additional zones and reconfigure existing MetaFrame XP farm servers so that they belong to zones other than the default zone.

Server farm zones in MetaFrame XP serve two purposes:

• Zones allow for efficient collection and aggregation of MetaFrame XP server statistics.

• Zones allow for efficient distribution of server farm configuration changes.

As you have probably guessed, zones are intended to allow server farms to grow efficiently. Large server farms that have their zones properly designed will allow end users to have quick and efficient logons and server connections. A server farm that is partitioned into multiple zones can contain many more servers than a farm that has only one zone. Server farm zones only affect the technical communication aspects of server farms. Zone configuration does not affect any administration or security components.

All MetaFrame XP servers must belong to a server farm, and every server farm must have at least one zone. Therefore, every MetaFrame XP server must also belong to a zone. One zone can contain up to 100 MetaFrame XP servers before performance dictates that more zones should be created. However, this does not necessarily mean you must wait until you have 100 servers before you should create more zones. In the real world, your network architecture will drive the number of zones that you will create.

Each MetaFrame XP server monitors itself for many events, including user logons, logoffs, connections, disconnections, and its server load and performance-related statistics. This self-monitoring is necessary for load-balancing and license tracking to work. Each server actively tracks its own statistics and sends periodic reports to a central location. It is not practical for every single MetaFrame XP server to notify every other MetaFrame XP server in the farm each time one of these user or performance metrics changes. (This is how MetaFrame 1.8 worked and is the reason why it didn’t scale very well.) However, it’s important that all MetaFrame XP servers know the current status of all the other servers so that farm-wide load balancing and license tracking can function.

As shown in Figure 3.10 (on the next page), a lot of network communication occurs between MetaFrame XP servers in a zone. Considering the fact that the environment in the diagram is made up of only nine servers, you can imagine how much communication would take place if this environment was made up of twenty or thirty servers. This is where zones become necessary.

Figure 3.10 The communication between multiple servers in one zone

A server farm zone is a logical group of MetaFrame XP servers that communicate user load, performance, and licensing statistics exclusively with each other. One chosen server within each zone communicates with one chosen server from each of the other zones. In this model, only one server from each zone participates in farm-wide server-to-server communication, instead of every single MetaFrame XP server in the farm contacting one central server.

The “chosen” MetaFrame XP server that communicates with other zones is known as the Zone Data Collector (ZDC). There is only one ZDC per zone, and every zone must have one. The ZDC is the server responsible for knowing all up-to-date statistics about every MetaFrame XP server in the zone, including user load, performance load, and license usage. Whenever any of these monitored parameters changes on any MetaFrame XP server, that server sends notification of the change to its local zone’s ZDC. The ZDC then notifies all other ZDCs of the other zones in the farm.

All ZDCs from each zone within a server farm maintain an open connection with all other ZDCs in the farm, forming a hierarchical communication chain that ultimately touches every MetaFrame XP server in the farm.

Figure 3.11 (facing page) shows what the servers from Figure 3.10 would look like if they were partitioned into multiple zones. As you can see, network communication is vastly reduced when compared to the diagram that only had one zone.

Because each ZDC must maintain an open link to all other ZDCs in the farm, you should try to keep the total number of zones in the farm as low as possible while still having enough zones to be efficient. There is a fine line between too many zones and not enough. We’ll take a more in-depth look at this in a bit. For now, remember that any time a user logs on or off, connects or disconnects, or any server load changes, server updates are sent to all ZDCs in the entire farm.
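To see why the number of zones matters, a back-of-the-envelope sketch helps. The figures below are simple arithmetic based on the model described above (a full mesh of ZDC connections, with each dynamic event forwarded to every other ZDC), not measurements from Citrix:

```python
# Rough model of inter-zone traffic growth as zones are added to a farm.
# This is illustrative arithmetic, not Citrix code.

def zdc_links(zones: int) -> int:
    """Open ZDC-to-ZDC connections farm-wide (every ZDC links to every other)."""
    return zones * (zones - 1) // 2

def updates_per_event(zones: int) -> int:
    """Remote ZDCs that must be notified each time one dynamic event occurs."""
    return zones - 1

for z in (2, 5, 10):
    print(f"{z} zones: {zdc_links(z)} open links, "
          f"{updates_per_event(z)} updates per event")
```

At two zones there is a single inter-zone link; at ten zones there are 45 open links and every logon, logoff, or load change fans out to nine remote ZDCs. This is why you want enough zones to keep ZDC responses local, but no more.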

Figure 3.11 Multiple MetaFrame XP servers in multiple zones

Zone Data Collector (ZDC)

The Zone Data Collector is a role that one MetaFrame XP server performs for each zone within a server farm. You do not have to explicitly configure a server to be a ZDC, because anytime there is more than one MetaFrame XP server in a zone an election takes place to choose the one server that will act as the ZDC. You can, however, change the election preferences of individual servers to affect the outcome of an election, essentially allowing you to select which server you would like to become the ZDC. These preferences range from “most preferred” to “least preferred.” Election preference settings are set via the Citrix Management Console (CMC) in the server farm’s properties box. (CMC | Farm | Properties | Zone | Highlight Server | Preference)

Zone Data Collector Elections

Zone elections take place automatically within each zone to designate the MetaFrame XP server that will act as the Zone Data Collector. The outcome of the election is decided by the following three criteria, listed in order of precedence:

1. Software version number. (The newest version will always win.)

2. Manually configured election preference. (As configured in the CMC.)

3. Host ID. (The highest host ID will win.)

As you can see by the election criteria, the software version number carries a higher precedence than the manually configured preference in the CMC. This is like an insurance policy, just in case Citrix ever decides to make any radical changes to the operation of the ZDC (in the form of a hotfix or service pack, for example). By designing the election criteria this way, Citrix ensures that the ZDC will always be the most up-to-date server in the zone. As an administrator, it is important to remember this version precedence, especially when you are testing new or beta versions of MetaFrame XP software. If you install a new test version of MetaFrame XP into an existing production zone, the test server will become the ZDC because it will be a newer build than your existing production MetaFrame XP servers. Of course, this can easily be avoided by installing test servers into their own server farms, or at least their own zones.

The final election criterion, the “host ID” parameter, is essentially a tie-breaker if the first two items are the same on more than one server. The host ID is a random number that is generated when MetaFrame XP is installed. The server with the highest host ID will win the ZDC election. You cannot change the host ID. If you would like to change the outcome of an election then you should simply change the “Election Preference” parameter of a server in the CMC.
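The three election criteria amount to an ordered comparison: software version first, then the administrator-set preference, then host ID as the tie-breaker. The sketch below illustrates that ordering with hypothetical server names and values (the preference is modeled as a number, with larger meaning "more preferred"; this is not Citrix's actual implementation):

```python
# Illustrative model of ZDC election precedence. All names and values
# are hypothetical; only the ordering of the criteria comes from the text.

def election_winner(servers):
    # Higher tuple wins: newest software version beats everything,
    # then the configured preference, then the highest host ID.
    return max(servers, key=lambda s: (s["version"], s["preference"], s["host_id"]))

farm = [
    {"name": "XP01", "version": 1, "preference": 2, "host_id": 4711},
    {"name": "XP02", "version": 1, "preference": 3, "host_id": 1138},  # "most preferred"
    {"name": "TEST", "version": 2, "preference": 0, "host_id": 9001},  # newer build
]
print(election_winner(farm)["name"])  # → TEST
```

Note that the test server with the newer build wins even though XP02 was configured as most preferred, which is exactly the beta-server pitfall described above.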

Now that you understand how the outcome of a ZDC election is determined, let’s look at what causes an election to take place.

There are several events that initiate a ZDC election, as outlined below. Any one of these “triggers” can cause an election and there is no order or precedence. Any MetaFrame XP server can call an election by sending out an “election request.” Election requests are sent out when any of the following events occur:

• A MetaFrame XP server loses contact with the ZDC. (That MetaFrame XP server will send out an election request.)

• The ZDC goes off-line. (If the ZDC is shut down gracefully it will send out an election request before it shuts down its local IMA service. If the ZDC is unexpectedly shut down then the next MetaFrame XP server that tries to send an update to it will notice that the ZDC is gone and will send out the election request.)

• A new server is brought online. (It sends out an election request as soon as the local IMA service is started.)

• An election is invoked manually by an administrator with the querydc -e command. (The server where this command is executed sends out an election request.)

• The configuration of a zone changes (when a MetaFrame XP server is added or removed, or a new zone is created). The server that receives the update from the CMC sends out an election request. Depending on the servers affected by the change, election requests could be sent out to multiple zones.

After a ZDC election is complete, if a new server is elected as the ZDC then every other MetaFrame XP server sends the new ZDC its complete status information. If the newly-elected ZDC is the same server as before the election, the other MetaFrame XP servers are smart enough not to resend their information because they know that the ZDC has their up-to-date information from just before the election.

Remember that each ZDC maintains connections to all other ZDCs in the farm. If a ZDC loses an election, it notifies the ZDCs in other zones that it is no longer the ZDC for that zone. If a ZDC goes off-line, ZDCs from other zones figure out that there is a new ZDC when the new ZDC begins contacting them for information.

If you’re familiar with MetaFrame 1.8, then you know about the ICA browser service and the ICA master browser elections. Zones and ZDCs perform similar functions (but are much faster and more reliable). Also, unlike MetaFrame 1.8, there are no backup zone data collectors in MetaFrame XP.

Communication between MetaFrame XP servers and the ZDC

The ZDC maintains the dynamic information of all the MetaFrame XP servers in the zone. Each MetaFrame XP server in the zone notifies the ZDC immediately when any of the following events occurs:

• There is an ICA session logon, logoff, disconnect, or reconnect.

• The server or application load changes.

• Licenses change (used, released, added, or removed).

• A MetaFrame XP server comes online or goes off-line.

• Any published application’s settings change.

• A MetaFrame XP server has an IP or MAC address change.

All of this information is collectively known as “session data.” No session data is stored permanently on the ZDC. It is all kept in memory for use only by the IMA service. You can view any of this data at any time with the queryds command-line utility.

If a ZDC does not receive any communication from a MetaFrame XP server in its zone for 60 seconds, the ZDC will perform an "IMA Ping" to determine whether the server is still online. You can change this interval by adding the following registry value:

Key: HKLM\Software\Citrix\IMA\Runtime

Value: KeepAliveInterval


Data: The interval in milliseconds, entered in hex notation. (The 60 second default would be 60,000 milliseconds, or 0xEA60 in hex.)

When entering the registry value, remember that you can use the Windows calculator to convert from decimal to hex. (View Menu | Scientific | Enter your decimal number | click the “hex” button.)
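If you'd rather not use the Windows calculator, any scripting language can do the conversion. This quick Python check covers the 60-second default above as well as the 300- and 600-second intervals discussed elsewhere in this chapter:

```python
# Convert the interval defaults from seconds to milliseconds, and show
# the hex notation you would enter in the registry.
for seconds in (60, 300, 600):
    ms = seconds * 1000
    print(f"{seconds} s = {ms} ms = {ms:#x}")
```

Running this prints `0xea60`, `0x493e0`, and `0x927c0`, matching the values given in the text.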

Communication between Zone Data Collectors

As soon as a ZDC receives an update from a MetaFrame XP server, it forwards the information to all other ZDCs in the farm. If the ZDC fails to connect to one of the other ZDCs, it will wait five minutes and then try again. This five-minute interval is also configurable via the registry:

Key: HKLM\Software\Citrix\IMA\Runtime

Value: GatewayValidationInterval


Data: The interval in milliseconds, entered in hex notation. (The 300 second default would be 300,000 milliseconds, or 0x493E0 in hex.)

Looking at the large amount of frequently-changing session data that a ZDC must deal with leads to one question:

Should you build a dedicated Zone Data Collector?

Once your environment grows larger than a few MetaFrame XP servers you may begin to wonder whether you should build a “dedicated” MetaFrame XP server that acts only as a zone data collector without hosting any user sessions.

The ZDCs of larger zones can get very busy. Building a dedicated server is a good way to minimize the risk that a MetaFrame application server, too busy with its ZDC duties, will degrade the live user sessions it is also hosting.

There are no hard numbers to dictate the point at which you should build a dedicated ZDC. In the real world, if a zone has more than ten servers or so, people tend to build a dedicated ZDC. Of course hardware is always getting faster and faster, so this number may change.

If you are at the point where you don’t know whether or not you need a dedicated ZDC, the best thing to do is to look at the performance of the server acting as the ZDC and compare it to servers that are not acting as the ZDC. Refer again to the previous section for a list of how much work the ZDC must do.

If you don’t want to dedicate one server to be the ZDC, there is a trick that you can use to still get the best performance possible. All you have to do is pick the server that you want to be your ZDC and configure it for the “most preferred” election preference in the CMC. Then, configure the load balancing for that server so that it takes on fewer users than your other servers. (See Chapter 15 for details.) Doing this will help ensure that your ZDC is not bogged down by user sessions, allowing it to perform its ZDC tasks as needed.

On the other hand, if you decide to create a dedicated ZDC, the process for configuring it is simple: Install MetaFrame XP on the server; add it to your farm and zone; configure it for the “most preferred” ZDC preference in the CMC; and don’t publish any applications to it.

If you build a dedicated ZDC, be sure to remember that you must install any Citrix hotfixes or Service Packs to your dedicated ZDC first, otherwise you run the risk that your dedicated ZDC could lose a ZDC election to a more up-to-date server somewhere else.

Advantages of Building a Dedicated ZDC

• You won’t have to worry about the ZDC overhead impacting one of your production MetaFrame XP servers that is hosting user sessions.

• Because MetaFrame XP licensing is connection-based, your dedicated ZDC will not require a Citrix license.

Disadvantages of Building a Dedicated ZDC

• You will need to find, buy, steal, or otherwise acquire an extra server.

• You will need to buy a Microsoft Windows server license for that server.

Why should you care about zone design?

Proper zone design is important. There are several areas that are directly affected by the number of zones and the location of the zone data collectors. These areas include:

• WAN performance.

• Application enumeration.

• Application connection speed.

• Farm change propagation speed.

WAN Performance

In a large environment with multiple WAN locations, you need to consider the network bandwidth cost of placing separate zones at each WAN point. Because every ZDC establishes a connection with every other ZDC in the farm, and because all ZDCs update each other whenever anything happens, too many zones will adversely affect WAN bandwidth with all the ZDC traffic. This means that fewer zones are better.

Application Enumeration

When users request a list of available MetaFrame XP applications, the zone data collector is contacted to return the list of applications that are available to that user. If the ZDC is far away or too busy (because the zone is too large), the user will have to wait a long time for the ZDC response that provides them with their application list. This means that more zones are better.

Application Connection Speed

When users connect to published load-balanced applications, the ZDC is contacted to find out which MetaFrame XP servers run the application and which server has the lightest load. As with application enumeration, if the zone data collector is far away or busy, the user will have to wait a long time for the ZDC response that allows them to attach to the appropriate MetaFrame XP server to start their application session. This also means that more zones are better.

Farm Change Propagation Speed

Any farm-wide configuration changes that are made in the Citrix Management Console must be propagated down to every MetaFrame XP server in the farm. Fortunately, the Zone Data Collectors are leveraged for this update. The ZDCs receive the updates from the server running the CMC and forward the changes to the MetaFrame XP servers in their respective zones. The more zones there are, the faster these updates reach every MetaFrame XP Server. This means that more zones are better, but this is not as important as the first three factors.

As you have seen, there are advantages and disadvantages that will apply no matter how many zones you have.

What are the zone design options?

After the complete analysis of how zones work, everything that they affect, and everything that can affect them, it’s finally appropriate to look at the options available when designing zones. There are really only two choices.

With a server farm that spans multiple, physical locations, you can:

• Configure one zone that spans physical locations.

• Configure each location to be its own zone.

Let’s look at the details of each option.

Option 1.  Create One Zone

Because a server farm zone does not have to be confined to a single geographic location, it’s possible to limit WAN communications between locations by creating a large zone that includes MetaFrame XP servers from multiple locations. This is clearly evident back in Figures 3.10 and 3.11.

However, potentially severe consequences could result if only one zone is created for multiple locations. Since this one zone will have only one ZDC, user logons and application enumerations could be slow since each of these actions requires contact with the ZDC which could be on the other side of the WAN. Also, because the ZDC is responsible for distributing farm configuration changes to all MetaFrame XP servers in the zone, one giant zone will force the ZDC to communicate with every single MetaFrame XP server in the zone, potentially sending the same change across the WAN multiple times. (This fact can be seen in the case study at the end of the chapter.)

Advantages of Creating One Zone that Spans Multiple Sites

• All session update information is only sent across the WAN once, to the zone data collector.

Disadvantages of Creating One Zone that Spans Multiple Sites

• User logons, queries, and application enumerations could be slow if the zone data collector is far away from the users.

• More traffic could be generated by MetaFrame XP server-to-ZDC session information updates than is saved by having one zone.

• The MetaFrame XP server-to-ZDC session update information cannot be configured, timed, parsed, queued, or limited (unlike ICA Gateway traffic in MetaFrame 1.8). This is because by definition, it is assumed that all servers in one zone are well-connected, and that bandwidth is plentiful and cheap.

• Farm configuration changes must traverse the network once for every server, because having one zone removes the hierarchy.

Obviously, there can be substantial network performance degradation if only one zone is created when multiple zones are needed.

Option 2. Create Multiple Zones

Splitting a farm into multiple zones is a logical option, especially for larger environments. However, you need to be careful that you do not create too many zones.

Because all ZDCs maintain open connections to all other ZDCs, updates are continually sent between zones. This can affect performance if the bandwidth between zones is limited. There is no way to cut down these updates (unless a third-party Quality of Service device is used, like those from Packeteer or Sitara. These devices are discussed in Chapter 6.)

Advantages of Creating Multiple Zones

• Local zones allow for fast user logons, application queries, and application enumerations.

Disadvantages of Creating Multiple Zones

• All background information is replicated to all zone data collectors. (Such is the price for having continuous local, up-to-date information about all zone servers.)

• All zone data collectors need direct access to all other zone data collectors.

In general, you should be careful that you do not create too many zones. Don’t create another zone just for the sake of having it, because too many zones can actually hurt your network performance more than not having enough zones.

Considerations when Designing your Zones

Remember that the way you design your zones does not affect the management of the server farm. The number and locations of zones should be based purely on technical factors, which is why the factors listed here are technical. The answers to the following three questions will directly affect zone traffic, and therefore zone design:

• In what ways will users access the applications?

• How many servers are there?

• What is the bandwidth and connectivity between servers?

User Access to Applications

The ways that users access their MetaFrame XP applications and the configuration of those applications will help you determine your zone design. The following four items need to be considered:

• Number of users. The more users there are, the more zones are needed, as more ZDCs will be needed to service user requests.

• Length of the average session. If users log on to their applications and stay on all day, no session information will change on the MetaFrame XP servers and the ZDC will not be contacted, allowing for larger zones. If users log on and off constantly, ZDC updates will be frequent, causing more ZDC load and requiring smaller zones (more ZDCs).

• Number of simultaneous logons. ZDCs are used most heavily as users enumerate and connect to MetaFrame XP servers. Thousands of users logging in simultaneously may overwhelm one ZDC. If all your users start working at the same time, you may need more zones.

• Number of load-balanced published applications. More applications require more zones because there is more application information that must be updated across the farm via the ZDCs.

When designing zones, you need to consider everything that communicates with the ZDC. In general, the more IMA communication going on inside the farm, the more ZDCs are needed, which means more zones.

Number of MetaFrame XP Servers

One zone can support about 100 servers. This is not a hard limit, but rather a practical performance-based limit determined by internal Citrix research. If you have more than 100 servers in a single zone, you will most likely need to partition it into two zones. Of course, servers will continue to get faster, which means that by the time you are reading this, you can probably build a ZDC powerful enough to support more than 100 servers. At that point, however, you will probably have other reasons to create multiple smaller zones.

Bandwidth and Connectivity Between MetaFrame XP Servers

The available bandwidth between MetaFrame XP servers needs to be assessed when looking at zone design. Keep in mind that every dynamic change to any MetaFrame XP server in the environment is sent first to a local ZDC, which in turn sends the change to all other ZDCs in the farm. Consider the environment in Figure 3.12:

Figure 3.12 A single server farm with servers separated by WAN links

This WAN environment is configured with MetaFrame XP servers in three separate locations. If three zones are created, every time a dynamic event (such as a user logon) occurs, the local ZDC will send that event to ZDCs in the other two zones. This single event ends up crossing the WAN link two times, once to each other zone data collector. If, instead, the environment is configured as a single zone (with one ZDC), every time a dynamic event occurs, the event traverses the WAN link only once to the central ZDC. The downside to the single zone is that when any information is needed from the ZDC (such as for a user logon), the information may be across the WAN, instead of local (because the ZDC may be across the WAN). It is this balance that you must consider when designing zones.
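The trade-off in Figure 3.12 can be reduced to simple counting. The sketch below models just the WAN crossings per dynamic event for a three-site farm under each design; it is an illustration of the reasoning in the paragraph above, not a traffic measurement:

```python
# Rough model: WAN crossings per dynamic event for a farm spanning three sites.
SITES = 3

# Multiple zones (one per site): the local ZDC forwards each event
# to every other ZDC, so the event crosses the WAN once per remote zone.
multi_zone_crossings = SITES - 1   # 2 crossings per event

# Single zone: an event on a remote server crosses the WAN once to the
# central ZDC; events on the ZDC's own site cross zero times. The cost
# moves instead to logon time, since lookups may also have to cross the WAN.
single_zone_crossings = 1

print(multi_zone_crossings, single_zone_crossings)
```

As the number of sites grows, the multi-zone design's per-event cost grows with it, while the single-zone design trades that steady cost for slower logons at remote sites.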

Usually, you will end up creating a unique zone for each physical network location within a server farm. However, this does not always have to be the case. You can configure any server to be a member of any zone. Zone boundaries do not have to fall in line with IP subnets or network segments.

The IMA Data Store

Now that we’ve seen how zone data collectors are responsible for tracking and maintaining server information that changes frequently, let’s take a look at the components of a MetaFrame XP server farm that maintain information that does not change frequently. This information is stored in a database called the IMA data store.

The IMA data store is not the same as the zone or the Zone Data Collector. (In fact, the two aren’t related at all.) The IMA data store is an ODBC-compliant database containing persistent configuration information, whereas the zone data collectors contain dynamic information that changes frequently. To view or change configurations stored in the IMA data store, you use the Citrix Management Console. Any information that it displays is pulled from the IMA data store, and when you click the “OK” button after making a change, that information is written to the IMA data store.

All server farm configuration information is saved in the IMA data store, as opposed to being saved on individual MetaFrame XP servers. Whenever a MetaFrame XP server starts up, it contacts the IMA data store and downloads its configuration information. (Actually, this contact occurs whenever the IMA Service is started, which usually coincides with the server starting up, but not always.)

MetaFrame XP servers always know where the data store is because they each have a local file-based DSN called mf20.dsn. This DSN contains the information for connecting to the IMA data store. (More on this file in Chapters 12 and 16.)

After a MetaFrame XP server initially contacts the IMA data store and downloads its configuration information, the server will check for changes every 10 minutes. This default interval can be changed via the following registry key:

Key: HKLM\Software\Citrix\IMA

Value: DCNChangePollingInterval


Data: The interval in milliseconds, entered in hex notation. (The 600 second default would be 600,000 milliseconds, or 0x927C0 in hex.)

Local Host Cache

As previously stated, every MetaFrame XP server downloads its configuration information from the server farm’s IMA data store. Each server is smart enough to only download information from the IMA data store that is relevant to it. Information about other servers is ignored and not downloaded. Once the information is downloaded, it is saved locally in a Microsoft Access-style database known as the “Local Host Cache.” This local host cache is maintained on every MetaFrame XP server. It serves two purposes:

• Increased Redundancy. If communication with the IMA data store is lost, the MetaFrame XP server continues to function for up to 48 hours (96 hours with Feature Release 2) because the vital configuration information it needs is available in its local host cache.

• Increased Speed. The local host cache contains information that the MetaFrame XP server needs to refer to often. By maintaining the information locally, the MetaFrame XP server does not have to access the IMA data store across the network every time any bit of information is needed.

Even though all application publishing information, domain trust rights, and application user rights are retained locally at each MetaFrame XP server in its local host cache, the Zone Data Collector is still contacted whenever a user launches an application. This contact is made so that the ZDC can keep an accurate count of each server’s user load.

You can manually refresh a MetaFrame XP server’s local host cache with the command dsmaint refreshlc. Real world experience shows that this manual refresh is something you’ll do more often than you care to, particularly if you are testing new applications and do not want to wait for the changes to be propagated down to every MetaFrame XP server.

IMA Data Store Database Type

The IMA data store can be in one of four database formats: Microsoft Access (MS Jet), Microsoft SQL Server, Oracle, or IBM DB2. (IBM DB2 requires Feature Release 2 for MetaFrame XP.) The IMA data store works like any standard database, which means that the best performance, reliability, and scalability will be found when SQL Server, Oracle, or DB2 is used. Of course in order to get these benefits you need to address the additional management, hardware, and licensing requirements of these databases.

IMA Data Store Access Mode

Every MetaFrame XP server must belong to a server farm and have access to that farm’s IMA data store. MetaFrame XP is designed to be able to access the data store in either “direct” or “indirect” mode.

When direct mode is used, a MetaFrame XP server connects directly to the database server that is running the IMA data store. A MetaFrame XP server that accesses the IMA data store via indirect mode accesses the database by connecting to another MetaFrame XP server. That other server then forwards the requests directly to the database, and then sends the information back to the original server.

If the IMA data store is a Microsoft Access database, it must be accessed via indirect mode. IMA data stores running on SQL, Oracle, or DB2 platforms can be accessed via direct or indirect mode.

IMA Data Store Replication

MetaFrame XP servers need to have regular communication with the IMA data store to ensure that they always have the current configuration information for the server farm. (Even though we say “regular” communication to describe the communication between a MetaFrame XP server and the IMA data store, keep in mind that this communication is not nearly as frequent as the communication between a MetaFrame XP server and its zone data collector.)

Because of this regular communication, slow network links between MetaFrame XP servers and the IMA data store can cause problems, such as extremely long IMA service start times and timeouts during sequential reads from the data store.

Obviously, this is a situation that should be avoided. One way to avoid this is to split the server farm into multiple, smaller farms. While this would technically solve the problem of MetaFrame XP servers having a slow connection to the data store, it would introduce the complexities and additional management requirements associated with multiple server farms.

Alternately, the IMA data store can be replicated throughout your network environment, so that multiple database copies exist in different locations for various MetaFrame XP servers to access. This data store replication is not a feature of MetaFrame XP; rather, database replication is a native feature of Microsoft SQL Server or Oracle.

Microsoft Access-based data stores do not support replication, because Microsoft Access itself does not support replication. This should not be a problem for you, because if your environment is big enough that you need data store replication, then you shouldn’t be using Microsoft Access to run your data store anyway.

Also, IBM DB2 databases cannot be used for IMA data stores if you plan to replicate the database. This is because MetaFrame XP uses the binary large object data type to store information in DB2 databases, and DB2 does not support the use of that data type for replication.

One of the downsides of database replication in general is that the multiple replicas of the database must stay synchronized with the master copy. This can get to be a problem if many changes occur to the database simultaneously. Fortunately, MetaFrame XP servers only read data from the IMA data store. The only time that information is written to the data store is when a configuration change is made through the Citrix Management Console. Because these changes are manually performed by administrators, there is no risk that too many will occur simultaneously. In fact, in many cases, only one change will be made at a time.

Advantages of Replicating the IMA Data Store

• Increased performance in large server farms.

Disadvantages of Replicating the IMA Data Store

• You need multiple database servers.

• Adding MetaFrame servers to the farm is more complex.

• Only SQL and Oracle data stores can be replicated.

How to Configure IMA Data Store Replication

Detailed step-by-step procedures for replicating an IMA data store can be found in the “MetaFrame XP Advanced Concepts Guide,” available for free from Citrix. Ultimately, you need to configure one database so that it is the “master” copy of the data store. Changes made to the master copy are replicated to read-only copies throughout your environment. All MetaFrame XP servers are then configured, via their local mf20.dsn files, to point to the nearest replica of the data store.

The only caveat to replicating the IMA data store is that in order to make any changes to the farm or publish new applications, you must use a CMC that is connected to the read/write master copy of the data store. The easiest way to do this is to configure the CMC as a published application on one of the servers that connects to the master copy.

More details about configuring the DSN that a MetaFrame XP server uses and publishing the CMC can be found in Chapter 16.

How to Add a New Server to a Replicated Environment

When you add a new MetaFrame XP server to an environment with a replicated data store, you will need to point it to the read/write master copy of the data store during the installation process. Once MetaFrame is completely installed, you can configure the server (as described in Chapter 16) to use a closer read-only copy of the data store.

IMA Data Store Size

Regardless of the mode of access or the database platform, the IMA data store will require approximately 200 KB for each MetaFrame XP server in the farm. This will vary slightly based on the number of applications published, how print drivers are used, and the exact configuration of the farm. The good news is that the database will always be relatively small. Even the largest environments only have data stores that are around 50 MB or so.
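The arithmetic behind these numbers is simple enough to sketch. This is only a back-of-the-envelope estimate using the approximate 200 KB per-server figure quoted above; actual sizes vary with published applications and printer driver configuration:

```python
# Rough sizing sketch for an IMA data store, using the ~200 KB per
# MetaFrame XP server rule of thumb. Estimates only, not exact sizes.

KB_PER_SERVER = 200  # approximate data store usage per farm server

def estimate_data_store_kb(num_servers: int) -> int:
    """Return an estimated IMA data store size in KB for a farm."""
    return num_servers * KB_PER_SERVER

# A 50-server farm needs on the order of 10 MB; even a 250-server
# farm stays around 50 MB, matching the "relatively small" claim.
print(estimate_data_store_kb(50))   # 10000 KB, i.e. ~10 MB
print(estimate_data_store_kb(250))  # 50000 KB, i.e. ~50 MB
```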

Why should you care about IMA data store design?

As you have seen, there are many technical components that you need to understand when designing the IMA data store for your server farm. Based on these components, it’s easy to see that the design of the IMA data store has the potential to impact several areas of your MetaFrame XP environment, including:

• WAN bandwidth.

• IMA service startup times.

• Server farm reliability.

• Scalability.

WAN Bandwidth

Because each MetaFrame XP server needs to read from the IMA data store, your WAN bandwidth can be adversely affected if you do not adequately plan for the location of the data store and its replicas. You want to make sure that you know how and when these database reads will occur.

IMA Service Startup Time

Every time the IMA service is started on a MetaFrame XP server, the IMA data store is queried and the necessary information is downloaded to the MetaFrame XP server’s local host cache. If the nearest copy of the IMA data store is across a slow network link, the IMA service could take a long time to start as it downloads this information. No users can log on until the IMA service has fully started, which cannot happen until the data store read is complete.


Server Farm Reliability

If the IMA data store is not available, MetaFrame XP servers fall back on their local host caches. If this occurs, you will not be able to make any configuration changes to the farm or to any published applications. Also, after 48 hours (or 96 hours with MetaFrame XP Service Pack 2) without contact with the IMA data store, the local MetaFrame XP servers’ license service will fail and users will not be able to log on. This 48- or 96-hour limit is an artificial cut-off built into the product and cannot be changed.
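The grace-period rule just described can be expressed as a tiny sketch, which makes the Service Pack 2 difference explicit:

```python
# Sketch of the licensing grace period described above: a server that
# cannot reach the IMA data store keeps accepting logons for 48 hours,
# or 96 hours with MetaFrame XP Service Pack 2, and then fails.

def logons_allowed(hours_offline: float, service_pack_2: bool = False) -> bool:
    """Return True if the license service is still within its grace period."""
    grace_hours = 96 if service_pack_2 else 48
    return hours_offline < grace_hours

print(logons_allowed(24))                       # True either way
print(logons_allowed(72))                       # False without SP2
print(logons_allowed(72, service_pack_2=True))  # True with SP2
```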


Scalability

All servers in a server farm need to contact the single IMA data store. If you want your farm to grow to several hundred servers, you’ll need to scale your IMA data store to support the large number of data requests and consider placing local data store replicas to service them.

What are the IMA data store options?

When it comes down to actually designing your data store, there are really two variables to look at (database platform and access mode) that combine into three practical options. For your IMA data store, you can have:

• A Microsoft Access database in indirect mode, accessed through one server.

• SQL, Oracle, or DB2 database in indirect mode, accessed through one server.

• SQL, Oracle, or DB2 database in direct mode, accessed directly via the database server.

If you have existing SQL, Oracle, or DB2 servers in your environment then you can put the IMA data store on them. SQL Servers must be at least SQL Server 7.0 with Service Pack 2, and Oracle servers must be version 7.3.4. In order to use IBM’s DB2 database for your data store, your MetaFrame XP servers must use Feature Release 2, and you must use at least DB2 version 7.2 with FixPak 5.

If you use a database other than Microsoft Access, you should know that the MetaFrame XP installation process cannot automatically create the database for you. You will need to manually create the empty database using the native database tools and specify that database during the MetaFrame XP installation. The installation program will automatically create and configure the tables it needs.

Option 1. Access Database IMA Data Store

Using the Microsoft Access database platform for your IMA data store is cheap, and the database can run on one of your MetaFrame XP servers. Access-based data stores are designed for small or test environments. The database does not scale well, and even if it did, other MetaFrame XP servers must always access the data store through the server that hosts it (via “indirect” mode).

Realize that even though we call this solution a “Microsoft Access” solution, it doesn’t actually mean that you have a copy of Access installed on your MetaFrame XP server. Technically, this solution uses a “Microsoft Jet” database. The drivers and support files needed to read and write Jet databases are included as part of the Windows operating system. Microsoft Access is an application that also just “happens” to use the Jet database format.

Advantages of MS Access-based Data Stores

• Inexpensive.

• No dedicated database hardware. (It can be run on a MetaFrame XP server.)

Disadvantages of MS Access-based Data Stores

• Single point of failure.

• Limited to 50 servers (for performance reasons).

• Slow.

• Data store cannot be replicated.

• The IMA data store often gets corrupted when Access is used.

Option 2. SQL, Oracle, or DB2 via Indirect Mode

A data store hosted by a SQL, Oracle, or DB2 server will be fast and reliable. However, just because the data store is on one of these platforms doesn’t automatically mean that all MetaFrame XP servers in the farm are accessing the database via “direct” mode.

For example, if you have five MetaFrame XP servers that all access a Microsoft Access-based IMA data store via indirect mode hosted on one server, at any time you could convert the database from Microsoft Access to SQL Server. The MetaFrame XP server that previously hosted the Access database would then connect directly to the SQL Server. However, the other four servers would continue to connect to the first server where the database was previously located. Technically, this configuration would work, but you would not realize the full benefits of the SQL Server and would still have a single point of failure.

In this case, you would still be accessing the IMA data store via “indirect mode,” connecting through one MetaFrame XP server, and you would gain little over keeping the Access database. (In this scenario, it would be easy to configure the other MetaFrame XP servers to access the new data store directly. See Chapter 16 for details.)

Advantages of SQL, Oracle, or DB2 via Indirect Mode

• Stable database.

• Scalable database.

• For Oracle and DB2, you wouldn’t have to install any custom database drivers on all your MetaFrame XP servers. (They would only be needed on the server that is accessing the database via direct mode.)

Disadvantages of SQL, Oracle, or DB2 via Indirect Mode

• Single point of failure.

• A database server is needed.

• Performance bottleneck.

Option 3. SQL, Oracle, or DB2 via Direct Mode

An IMA data store running on a Microsoft SQL, Oracle, or DB2 server that all the MetaFrame XP servers in the farm connect to directly is really the way to go for an enterprise-wide environment. It will be fast, reliable, and scalable. Also keep in mind that you can configure native database replication for SQL or Oracle so that there is a local copy of the IMA data store near each group of MetaFrame XP servers. The only downside to using SQL, Oracle, or DB2 via direct mode is the potential cost associated with the database software and servers. However, if you have MetaFrame XP servers throughout your enterprise, a few thousand dollars spent to ensure that the IMA data store is done right is money well spent.

Advantages of SQL, Oracle, or DB2 via Direct Mode

• Quick, reliable access.

• Stable database.

• Scalable database.

• No single point of failure.

• For SQL or Oracle, database replication keeps copies of the database near MetaFrame XP servers.

Disadvantages of SQL, Oracle, or DB2 via Direct Mode

• A database server is needed.

• One more server to purchase.

• One more server to manage.

Considerations when Designing your Data Store

Because there are very different designs and strategies for an IMA data store, it’s important that you determine which is right for you. Answering the following questions about your environment will help to make your decision as to what IMA data store options you should use:

• What is the WAN distribution of your MetaFrame XP servers?

• How many MetaFrame XP servers will there be?

• How crucial are the applications running on the MetaFrame XP servers?

• What is the budget?

• Is there already a database server that you can use?

• What is your tolerance for pain?

WAN Distribution of Servers

If your MetaFrame XP servers in a single server farm are located on opposite sides of a WAN, then you should consider database replication. This means that you would have to use SQL or Oracle for your data store. With servers in one location, database replication becomes less important (and your choice of database platform becomes less important).

Number of MetaFrame XP Servers

A Microsoft Access-based data store can support up to about 50 servers before performance bottlenecks would dictate moving it to a real database (i.e. SQL, Oracle, or DB2). If your MetaFrame XP environment will just be a handful of servers, then you can get away with choosing Microsoft Access for your data store platform.

Importance of your MetaFrame XP Environment

If MetaFrame XP is not mission critical in your environment, then it’s probably not worth spending the extra money to build a dedicated SQL, Oracle, or DB2 server to host the IMA data store. On the other hand, if your applications are mission critical, you probably don’t want to risk the single point of failure nature of a Microsoft Access-based data store.


Budget

If you do not have the budget for a dedicated database server, then your design is simple: use MS Access. The Microsoft Access data store option is free and requires no additional hardware or software. Then again, if you have the money, you should opt for the more reliable and efficient solution.

Existing Database Server

If you are lucky enough to work in a large environment there may already be a SQL, Oracle, or DB2 database server that you can use for your IMA data store. If you are really lucky, someone else might be responsible for administering that server, which would be one less thing for you to worry about.

Pain Tolerance

There have been numerous problems with IMA data stores based on Microsoft Access. Many administrators spend time repairing and restoring Microsoft Access-based data stores. This is most likely due in part to bugs in the MetaFrame XP software and in part to the fact that Microsoft Access is a desktop database, much better suited to tracking kitchen recipes than to hosting enterprise data stores. If you want the “real” solution, you need to use SQL Server, Oracle, or DB2.
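The considerations above can be condensed into a rough rule of thumb. This is an illustrative sketch only, not official Citrix guidance; the 50-server Access limit and the SQL/Oracle replication requirement come from the text above, while the rest reflects the judgment calls discussed in this section:

```python
# Illustrative decision helper condensing the data store considerations.
# Thresholds and platform constraints follow the text; the ordering of
# the checks is this sketch's own assumption.

def pick_data_store(num_servers: int, spans_wan: bool,
                    mission_critical: bool, has_budget: bool) -> str:
    if spans_wan:
        # Replication is only supported on SQL Server or Oracle.
        return "SQL Server or Oracle, direct mode, replicated"
    if num_servers > 50 or mission_critical:
        # Past ~50 servers (or for critical apps), Access is out.
        return "SQL Server, Oracle, or DB2, direct mode"
    if not has_budget:
        return "Microsoft Access, indirect mode"
    return "SQL Server, Oracle, or DB2, direct mode"

print(pick_data_store(5, False, False, False))   # small test farm
print(pick_data_store(80, True, True, True))     # WAN-spanning farm
```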

Future MetaFrame XP Server Relocation

Perhaps one of the most important properties of any technical design is knowing that it is not static and that changes will always need to be made. To that end, with MetaFrame XP it is possible (at any time) to move MetaFrame XP servers in and out of any farm, to move them to a different or new zone, or to point them to a new IMA data store location.

The movement and reconfiguration of MetaFrame XP servers is not tied to any particular MetaFrame XP network architecture. This means that you can design your MetaFrame XP network architecture for your environment as it stands today and then you can modify it in the future as your requirements and environment change and grow. We’ll look at the specifics of how to move and reconfigure servers in Chapter 16.

Now that we’ve looked at the components that go into the creation of your MetaFrame XP network architecture design, let’s take a look at a real world case study (beginning on the next page). In this case, we’ll look at the many designs that an international toy company considered for their MetaFrame XP architecture, and the design that they finally chose.

Real World Case Study

Lilydink Toy Company

The Lilydink Toy Company has decided to create a unified MetaFrame XP strategy for their entire enterprise. They currently have multiple pockets of users that use applications in multiple locations. Figure 3.13 shows their current business environment.

Figure 3.13 The Lilydink Toy Company’s business environment

All users in the remote sites need to access the corporate database application at the headquarters. Additionally, remote users will need to access applications at their respective remote sites.

After studying their environment, there were some technical design decisions that the project team could easily make. Lilydink decided that they would place MetaFrame XP application servers throughout their environment instead of placing them all at the headquarters. They figured that by doing this the users’ sessions would always execute in close proximity to their data. Also, they didn’t like the idea of application data from the remote sites traveling across the network to central MetaFrame XP servers, just to be sent right back to the remote user via an ICA session.

They next decided that they would have a local IMA data store at each physical site. If each site was its own server farm, then this would be a given. However, if they have a single, large server farm, they will place local data store replicas at each remote site that has MetaFrame XP servers.

Even with these preliminary questions answered, the Lilydink Toy Company still wasn’t sure what their server farm architecture should look like. After some lengthy discussions, they boiled it down to two questions:

• Should they create one large server farm or a separate server farm for each location?

• If they create a large server farm, should they partition it into separate zones or just have one large zone?

Lilydink decided to map out all possible solutions based on answers to these two questions. They came up with three different architectures worth considering:

• One company-wide server farm with one zone.

• One company-wide server farm with multiple zones.

• Multiple server farms.

The sections on the following pages compare three MetaFrame XP network architectures. While no architecture represents the “perfect” solution, each has very specific advantages and disadvantages. Most likely, a combination of architectures will work best for the Lilydink Toy Company.

Option 1. One Company-Wide Farm with One Zone

The first company-wide MetaFrame XP architecture that Lilydink considered was the creation of one large server farm not split into separate server zones. In this scenario, there would only be one zone data collector for the entire company. All session update information would traverse the WAN to the single zone data collector.

Lilydink’s IT staff created architectural diagrams to represent where the different MetaFrame XP components would be and to get a visual feel for the amount of network traffic between sites. Their first diagram is shown in Figure 3.14 on the next page.

Figure 3.14 One large server farm with a single large zone

The first thing that you may notice when looking at this layout is that there is quite a bit of network traffic between remote sites. In addition to users’ ICA traffic from their application sessions, every remote MetaFrame XP server creates a connection back to the zone data collector at the headquarters.

The design team was also concerned that having a zone data collector on the opposite side of a busy WAN link could frustrate remote users who try to log on during busy periods.

The Lilydink project team created the following list of advantages and disadvantages for this architecture:

Advantages of One Large Farm

• Licenses are pooled across all sites. Each remote user will only need one license, even though they will access applications on local and remote MetaFrame XP servers.

• Simple maintenance and administration, because all servers will be in the same server farm.

• All session update information is sent across the WAN once, to the zone data collector at the headquarters.

Disadvantages of One Large Farm

• Even though it happens only once, all session update information must be sent across the WAN to the zone data collector at the headquarters.

• Logons, queries, and enumerations could be slow for remote users, because the zone data collector is across the WAN.

• Firewall ports 2512 and 2513 must be opened to allow intra-farm communication if the WAN links connect through firewalls. (See Chapter 15 for more information on MetaFrame XP port usage.)

• Centralized administration only. Because all servers belong to the same server farm, remote sites cannot have their own administrators.

• Session information updates from each MetaFrame XP server to the zone data collector cannot be configured, timed, parsed, queued, or limited (unlike ICA Gateway traffic in MetaFrame 1.8).

Option 2. One Company-Wide Farm with Multiple Zones

Instead of having one zone, the Lilydink project team decided that another architecture option was to create one company-wide server farm with separate zones for each geographic location. This would allow them to have a zone data collector at each local site.

Figure 3.15 One farm, multiple zones

Even after a brief look at the architecture diagram (see Figure 3.15), it’s easy to see that the WAN traffic patterns differ from the first design. The main difference is that any and all session change information is passed from the first zone data collector to all of the others. For example, whenever a user logs on to a MetaFrame XP server, that server informs its zone data collector. If there is only one zone, no other communication takes place; even if the zone data collector is on the other side of a WAN link, the update only crosses the link once. However, with many zones, the zone data collector that receives the information from the MetaFrame XP server immediately notifies each and every one of the other zone data collectors. In a WAN environment such as the Lilydink Toy Company’s, those immediate zone updates would travel across the WAN repeatedly, once for each additional zone data collector.
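This update-forwarding behavior can be counted directly. The sketch below is a simplification (it tracks only session update traffic, not logon queries or enumerations) but it shows why adding zones multiplies the WAN crossings per update:

```python
# Back-of-the-envelope count of WAN crossings for one session update
# originating at a remote site, per the zone behavior described above.

def wan_crossings_per_update(num_zones: int) -> int:
    """WAN crossings for one session update from a remote MetaFrame server."""
    if num_zones == 1:
        # Remote server sends the update straight to the central
        # zone data collector: one WAN crossing.
        return 1
    # The server's own zone data collector is local; it then forwards
    # the update across the WAN to every other zone data collector.
    return num_zones - 1

print(wan_crossings_per_update(1))  # 1
print(wan_crossings_per_update(4))  # 3
```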

Advantages of One Farm with Multiple Zones

• Licenses are pooled across all sites. Each remote user will only need one license even though they will access applications on local and remote MetaFrame XP servers.

• Simple maintenance and administration, because all servers will be in the same server farm.

• Local zone data collectors at each site allow for fast logons, queries, and application enumerations.

Disadvantages of One Farm with Multiple Zones

• All background information is replicated to all zone data collectors. (Remember, this additional network load is the price you pay for having local, up-to-date information at each site.)

• Firewall ports 2512 and 2513 must be opened to allow intra-farm communication if the WAN links connect through firewalls.

• Centralized administration only. Because all servers belong to the same server farm, remote sites cannot have their own administrators.

• All zone data collectors need direct network access to all other zone data collectors.

Option 3. Multiple Server Farms

Unsure whether the WAN bandwidth could handle the traffic generated by a single MetaFrame XP server farm, Lilydink also considered creating multiple server farms, one for each geographically separate location. With this design, each farm is essentially its own entity. While this design has virtually zero impact on the WAN, it is also the most difficult to work with on a daily basis.

Figure 3.16 Multiple farms

As you can see in the diagram (Figure 3.16), the only traffic that traverses the WAN in this situation is the actual ICA session traffic from the remote users accessing applications from the headquarters. Of course, this architecture is not without its problems, most notably the inability to share MetaFrame XP connection licenses between locations.

Advantages of Multiple Server Farms

• No MetaFrame XP background IMA data replication.

• Departmental-based licensing.

• Each farm could have separate, local administrators.

• No IMA data store replication, since each farm has its own IMA data store.

Disadvantages of Multiple Server Farms

• Licenses are not pooled across different geographic regions, causing each remote user that accesses corporate and local applications to consume two connection licenses.

• All security must be configured independently at each farm.

• All farms must be administered separately. While there are ways to configure the Citrix Management Console to simultaneously show both server farms (see Chapter 16), farm administrators must manually make changes to the configuration of each farm separately.
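The licensing disadvantage is worth quantifying, since it drives much of the final decision. A minimal sketch of the rule described above (licenses pool only within a farm):

```python
# Licensing sketch for the multiple-farm design: connection licenses
# are pooled only within a farm, so a user who accesses applications
# in two separate farms consumes a license in each one.

def licenses_consumed(farms_accessed: int, single_farm: bool) -> int:
    """Connection licenses one user consumes across the environment."""
    return 1 if single_farm else farms_accessed

# A remote user running both local and headquarters applications:
print(licenses_consumed(2, single_farm=True))   # 1 in a unified farm
print(licenses_consumed(2, single_farm=False))  # 2 with separate farms
```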

Analysis of the Proposed Worldwide Architectures

After looking at the advantages and disadvantages of the three solutions outlined, the Lilydink IT staff decided that their MetaFrame XP company-wide architecture would be a combination of the three. In some places it would be logical to conserve bandwidth while in others it would be important to share licenses and management via common server farms.

The main advantage to having one giant worldwide farm is that one user will only need one license, regardless of the location or the number of servers that he accesses. With the abilities to replicate the IMA data store locally among sites and to segment the server farm into multiple zones, it is technically possible to build one large farm that spans multiple WAN locations.

However, if localized administration is important, having one giant farm can be a problem. Users are granted server farm administrative rights via the Citrix Management Console. These rights allow a user to access and change the IMA Data Store (which contains all configuration information about all farm servers). Server farm administration rights are “all or nothing.” There is no way to segment a server farm into multiple administrative groups (unless you split the farm into multiple farms). Even if native Microsoft security was used to “lock-down” a server (with NTFS rights, policies, etc.), all server farm administrators would still be able to change that server’s information in the IMA data store. This would be like revoking someone’s local administrative rights from a server but then giving them full control of the registry (the IMA data store in this case). This lack of administrative delegation ability within one server farm is a disadvantage to the many advantages of having one unified, global farm. (Note that not even Feature Release 2’s delegated administration helps in this case, since it delegates administration by role, not by server.)

In addition to the major farm design, another decision must be made as to the numbers and locations of server zones. Most likely, organizations will want to split any farm that traverses geographic locations into multiple zones. This is primarily due to the fact that all servers in one zone need constant access to the zone data collector. Additionally, having local zones will always ensure that a zone data collector resides on the same subnet as a MetaFrame XP server that a user would like to use.

Ultimately, the Lilydink Toy Company will create multiple zones, some spanning physical sites, others not. For example, several remote sites may have MetaFrame XP servers that all connect into Europe. It is possible that all of those remote sites would be one zone, while the US sites would be in another zone.

About the Author

Brian Madden is a freelance consultant based in Washington DC. He focuses on ubiquitous computing—helping customers provide access to critical applications for users regardless of client device, platform, location, or network connection. He is the author of two bestselling books about Citrix, most recently Citrix MetaFrame XP: Advanced Technical Design Guide, Including Feature Release 2.