HP ProLiant Network Adapter Teaming Explained

HP ProLiant Network Adapter Teaming is a software-based technology used to increase a server’s network availability and performance. HP ProLiant Network Adapter Teaming provides network adapter, network port, network cable, switch, and communication path fault recovery in addition to transmit and receive load balancing technology.

The objective of HP ProLiant Network Adapter Teaming is to provide network fault tolerance and load balancing. These two objectives are accomplished by teaming together two or more server network adapter ports. The term “team” refers to the concept of multiple server network adapters from the same server working together as a single server network adapter.

Types of Teaming

When deciding which teaming configuration best suits your environment, you’ll need to consider your technical requirements as well as the available network infrastructure. There are four basic types of teaming configurations:

  • Network Fault Tolerance
  • Transmit Load Balancing with Fault Tolerance
  • Switch-assisted Load Balancing with Fault Tolerance
  • Switch-assisted Dual Channel Load Balancing

Network Fault Tolerance
When using network fault tolerance (NFT), between two and eight physical NICs are teamed together to operate as a single virtual network adapter. Only one teamed port (the Primary teamed port) is used for both transmit and receive communication with the server. The remaining adapters are considered to be stand-by (or secondary adapters) and are referred to as Non-Primary teamed ports. Non-Primary teamed ports remain idle unless the Primary teamed port fails. All teamed ports may transmit and receive heartbeats, including Non-Primary adapters.
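To make this concrete, here is a minimal Python sketch (not HP’s actual driver code) of how an NFT team funnels every data frame through the Primary teamed port and promotes an operational Non-Primary port when the Primary fails:

    from dataclasses import dataclass

    @dataclass
    class Port:
        name: str
        operational: bool = True

    class NFTTeam:
        def __init__(self, ports):
            self.ports = ports          # two to eight teamed ports
            self.primary_index = 0      # the current Primary teamed port

        def transmit(self, frame):
            # All data frames travel through the Primary port only.
            if not self.ports[self.primary_index].operational:
                self.failover()
            primary = self.ports[self.primary_index]
            return f"{primary.name} sent {len(frame)} bytes"

        def failover(self):
            # Promote the first operational Non-Primary port.
            for i, port in enumerate(self.ports):
                if i != self.primary_index and port.operational:
                    self.primary_index = i
                    return
            raise RuntimeError("no operational teamed ports remain")

    team = NFTTeam([Port("nic0"), Port("nic1")])
    print(team.transmit(b"frame"))       # nic0 carries all traffic
    team.ports[0].operational = False    # simulate a cable pull
    print(team.transmit(b"frame"))       # the team fails over to nic1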

Transmit Load Balancing with Fault Tolerance
Transmit Load Balancing (TLB) mode incorporates all the features of NFT, plus Transmit Load Balancing. In this mode, two to eight adapters may be teamed together to function as a single virtual network adapter. The load-balancing algorithm used in TLB allows the server to load balance traffic transmitted from the server. However, traffic received by the server is not load balanced, meaning the Primary Adapter is responsible for receiving all traffic destined for the server. In addition, only IP traffic is load balanced.
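A hedged sketch of the transmit path follows; the hash on the destination IP address is an assumption for illustration (the actual HP algorithm may differ), but it shows why only traffic transmitted by the server is spread across the team:

    import ipaddress

    def pick_transmit_port(dst_ip, ports):
        # Spread transmitted IP traffic across the teamed ports by
        # hashing the destination address (illustrative only).
        key = int(ipaddress.ip_address(dst_ip))
        return ports[key % len(ports)]

    ports = ["nic0", "nic1"]
    for dst in ("10.0.0.5", "10.0.0.6", "10.0.0.7"):
        print(dst, "->", pick_transmit_port(dst, ports))

    # Receive traffic is NOT balanced: every inbound frame still arrives
    # on the Primary Adapter, whose MAC address the network knows.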

Switch-assisted Load Balancing with Fault Tolerance
Switch-assisted Load Balancing (SLB) mode incorporates all the features of NFT and TLB modes, but adds the load balancing of incoming traffic. In this mode, two to eight adapters may be teamed together as a single virtual network adapter. The load-balancing algorithm used in SLB allows for the load balancing of both the server’s transmit and receive traffic. Unlike TLB, which only load balances IP traffic, SLB load balances all traffic regardless of protocol. SLB is an HP term that refers to an industry-standard technology for grouping multiple network adapters into one virtual network adapter and multiple switch ports into one virtual switch port. HP’s SLB technology works with multiple switch vendors’ technologies. Switch-assisted Load Balancing (SLB) is not the same thing as Server Load Balancing (SLB) as used by some switch vendors; Switch-assisted Load Balancing operates independently of, and in conjunction with, Server Load Balancing.

Let’s take a closer look at these various modes.

Network Fault Tolerance (NFT)

There are three operating modes available for NFT Teams: Manual, Fail On Fault, and Preferred Primary.

Manual Mode
This mode for NFT is used for user-initiated failovers. When set, Manual mode still allows an NFT Team to fail over automatically during events that would normally cause a failover; however, it also allows the operator to trigger a failover manually with the click of a button. Manual mode is normally used for troubleshooting.

Fail On Fault Mode
The second mode available for NFT is Fail On Fault. In this mode, an NFT Team will initiate a failover from the Primary Adapter to an operational Non-Primary Adapter whenever a failover event occurs on the Primary Adapter. When the failover occurs, the two adapters swap MAC addresses so the Team remains known to the network by the same MAC address. The new Primary Adapter is considered just as functional as the old Primary Adapter. If the old Primary Adapter is restored, it becomes a Non-Primary Adapter for the Team but no MAC address changes are made unless there is another failover event on the Primary Adapter.
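The MAC swap is easy to model. A minimal sketch, assuming a two-port team and made-up addresses:

    def fail_on_fault(primary, standby):
        # The adapters swap MAC addresses so the Team keeps answering
        # on the same MAC; the promoted standby becomes the Primary.
        primary["mac"], standby["mac"] = standby["mac"], primary["mac"]
        return standby, primary

    nic0 = {"name": "nic0", "mac": "00:11:22:33:44:00"}  # the Team MAC
    nic1 = {"name": "nic1", "mac": "00:11:22:33:44:01"}
    primary, non_primary = fail_on_fault(nic0, nic1)
    print(primary)      # nic1 now answers on 00:11:22:33:44:00
    print(non_primary)  # nic0 keeps the spare MAC until the next failover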

Preferred Primary Mode
The last mode available for NFT is Preferred Primary mode. When choosing Preferred Primary mode, the operator is presented with a drop-down box to select the “Preferred Primary Adapter”. The operator should choose the adapter that, for whatever reason, is best suited to be the Primary Adapter. When an adapter is chosen as the Preferred Primary Adapter, it is used as the Primary Adapter whenever it is in an operational state. If the Preferred Primary Adapter experiences a failover event, the NFT Team fails over to a Non-Primary Adapter. If the Preferred Primary Adapter is then restored, the Team initiates a failback to the Preferred Primary Adapter.
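The difference from Fail On Fault is the automatic failback. A sketch, with the port names being examples:

    def current_primary(ports, preferred):
        # If the Preferred Primary Adapter is operational, it always
        # carries the traffic; otherwise any operational port will do.
        up = [name for name, ok in ports if ok]
        if preferred in up:
            return preferred
        if not up:
            raise RuntimeError("no operational teamed ports")
        return up[0]

    ports = [("nic0", True), ("nic1", True)]
    print(current_primary(ports, "nic0"))  # nic0, the preferred port
    ports[0] = ("nic0", False)             # failover event on nic0
    print(current_primary(ports, "nic0"))  # nic1 takes over
    ports[0] = ("nic0", True)              # nic0 is restored
    print(current_primary(ports, "nic0"))  # automatic failback to nic0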

Transmit Load Balancing (TLB)

With TLB, the recovery mechanism is very similar to the NFT failover mode discussed in the section titled “Fail On Fault”. In a two-port TLB Team, the Primary Adapter receives all data frames, while the Non-Primary Adapter receives only heartbeat frames. Both adapters are capable of transmitting data frames.

In the event of a failover, the Non-Primary Adapter becomes the Primary Adapter and assumes the MAC address of the Team. In effect, the two adapters swap MAC addresses. As before, only IP traffic is load balanced, and traffic received by the server is not load balanced.
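Putting the two restrictions together, the per-frame transmit decision looks roughly like this (the EtherType check is an assumption for illustration):

    ETHERTYPE_IPV4 = 0x0800

    def choose_port(ethertype, dst_key, ports):
        # Only IP frames are spread across the team; every other
        # protocol, and all receive traffic, rides the Primary port.
        if ethertype == ETHERTYPE_IPV4:
            return ports[dst_key % len(ports)]
        return ports[0]  # ports[0] is the Primary

    ports = ["nic0", "nic1"]
    print(choose_port(ETHERTYPE_IPV4, 7, ports))  # nic1: balanced IP
    print(choose_port(0x0806, 7, ports))          # ARP stays on nic0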
 
Switch Assisted Load Balancing (SLB)

All members of the SLB Team transmit and receive frames with the same MAC Address. Also, there is no concept of a Primary or Non-Primary Adapter as there is in NFT and TLB. With SLB, there are no heartbeat frames, and consequently no heartbeat failovers. In a two-port SLB Team, all members are capable of receiving data frames (based on the switch’s load balancing algorithm), and transmitting data frames (based on the Teaming Driver’s load balancing algorithm).

After a failover event in a two-port Team, only one adapter is currently working, so all transmit traffic is sent using it. All receive traffic is determined by the switch, which should detect that only one adapter is working. If a failed adapter is restored, all transmit and receive traffic is once again load balanced among all adapters. All receive traffic is still determined by the switch algorithm which should detect that both adapters are functional. If the switch sends a frame destined for the Team MAC address to any of the "operational" adapters in the Team, the adapter will receive it.

The HP Network Adapter Teaming driver does not control which frames are received; it only load balances the transmit traffic. With SLB, all protocols are load balanced, not just IP. To use SLB, you’ll need a switch that supports some form of port trunking. SLB does not support switch redundancy, since all ports in a team must be connected to the same switch. Also, SLB does not support any kind of port trunking auto-configuration protocol. If automatic port trunking is required, the 802.3ad Dynamic team type should be used with an IEEE 802.3ad Dynamic-capable switch.
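In sketch form, the contrast with NFT and TLB is that every port shares the Team MAC and both directions are balanced. The server side might look like this (the hash is again illustrative, and the switch’s receive algorithm is outside the driver’s control):

    TEAM_MAC = "00:11:22:33:44:00"   # every teamed port shares this MAC

    def transmit_port(frame_key, ports):
        # The teaming driver balances ALL protocols across whichever
        # ports are up; the switch picks the receive port on its own.
        up = [p for p in ports if p["ok"]]
        if not up:
            raise RuntimeError("no operational teamed ports")
        return up[frame_key % len(up)]["name"]

    ports = [{"name": "nic0", "mac": TEAM_MAC, "ok": True},
             {"name": "nic1", "mac": TEAM_MAC, "ok": True}]
    print(transmit_port(0, ports), transmit_port(1, ports))  # nic0 nic1
    ports[0]["ok"] = False            # after a failover event...
    print(transmit_port(0, ports))    # ...all transmit uses nic1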

What does this mean for your SBC environment?

In an ideal world, your terminal servers would be equipped with at least two gigabit network adapters, and the network infrastructure design should be well thought through. Teaming seems easy enough, but as you’ve read, there are quite a few things to take into consideration. And sometimes teaming isn’t the best solution for your terminal servers. All this teaming seems really cool, but why use it in the first place? For network fault tolerance and load balancing? Isn’t that also what your terminal servers or Citrix Presentation Servers were meant to provide?

First of all, realize that your server-based computing environment should probably be viewed as a single large, fault-tolerant, redundant, uniform user front office. Microsoft offers basic NLB (with some drawbacks, I must admit), and Citrix offers true load balancing. This covers load balancing your users in a rudimentary form with no switch configuration required. The challenge, then, is to use the network architecture in the most effective and efficient way.

Why not configure both network adapters separately? Use one network adapter to carry your users’ RDP or ICA sessions and the other to carry back office traffic. This way you can configure security settings for RDP or ICA on a per-adapter basis. The second network adapter could be configured with the proper DNS suffixes to handle your name resolution and back office traffic. All file transfers, user profile loading and unloading, policy deployments, and so on would use only one network adapter. By functionally splitting your terminal server in half, you now have two network adapters to use to the fullest: protocol security per network adapter, performance tuning on a per-adapter and per-protocol basis, and performance monitoring on a per-adapter basis. No more “why is my session sometimes freezing up?” You can pinpoint the cause of a problem more accurately and work more efficiently and effectively.
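As a minimal illustration of the split, here is how a service can be bound to one adapter’s address instead of to all of them. The address and port are stand-ins; on a real terminal server you would use the IP of the adapter you dedicated to RDP or ICA:

    import socket

    FRONT_NIC_IP = "127.0.0.1"  # stand-in for the user-facing adapter's IP
    PORT = 3389                 # the RDP port; pick a free one when testing

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Binding to one adapter's address (rather than 0.0.0.0) keeps this
    # service off the back-office adapter entirely.
    listener.bind((FRONT_NIC_IP, PORT))
    listener.listen()
    print(f"listening on {FRONT_NIC_IP}:{PORT} only")
    listener.close()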
