Network interface bonding is known by many names: Port Trunking, Channel Bonding, Link Aggregation, NIC teaming, and others. It aggregates multiple network connections into a single channel-bonded interface, allowing two or more network interfaces to act as one to increase throughput and to provide redundancy or failover.
The Linux kernel comes with the bonding driver for aggregating multiple physical network interfaces into a single logical interface (for example, aggregating eth0 and eth1 into bond0). For each bonded interface you can define the mode and the link monitoring options. There are seven different mode options, each providing specific load balancing and fault tolerance characteristics.
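As an illustration, a bond can be created at runtime through the bonding driver's sysfs interface. This is a minimal sketch assuming two slave NICs named eth0 and eth1 (adjust the names for your hardware); persistent, distribution-specific configuration is shown later.

```
# Load the bonding driver; by default it creates a bond0 master interface
modprobe bonding

# Select the bonding mode (must be done while bond0 is down and has no slaves)
echo active-backup > /sys/class/net/bond0/bonding/mode

# Slave interfaces must be down before they can be enslaved
ip link set eth0 down
ip link set eth1 down
echo +eth0 > /sys/class/net/bond0/bonding/slaves
echo +eth1 > /sys/class/net/bond0/bonding/slaves

# Bring the bond up and inspect its state
ip link set bond0 up
cat /proc/net/bonding/bond0
```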
Network Bonding Modes
The following bonding policy modes are available. See the /usr/share/doc/iputils-*/README.bonding file for complete descriptions of each policy; the table below summarizes and compares them, and a configuration example follows the table.
| Mode | Policy | How it works | Fault Tolerance | Load Balancing |
|---|---|---|---|---|
| 0 | Round Robin (balance-rr) | Packets are transmitted sequentially through each slave interface in turn. | Yes | Yes |
| 1 | Active Backup (active-backup) | Only one NIC is active at a time; if the active NIC goes down, another slave becomes active. | Yes | No |
| 2 | XOR (balance-xor) | The transmitting slave is selected by hashing (XOR) the source and destination MAC addresses, so the same slave is always used for a given destination MAC. | Yes | Yes |
| 3 | Broadcast (broadcast) | All transmissions are sent on all slave interfaces. | Yes | No |
| 4 | Dynamic Link Aggregation (802.3ad) | The aggregated NICs act as one NIC for higher throughput, and also provide failover if a NIC fails. Requires a switch that supports IEEE 802.3ad (LACP). | Yes | Yes |
| 5 | Transmit Load Balancing (balance-tlb) | Outgoing traffic is distributed according to the current load on each slave interface; incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over its MAC address. | Yes | Yes |
| 6 | Adaptive Load Balancing (balance-alb) | Includes balance-tlb plus receive load balancing, achieved through ARP negotiation; unlike Dynamic Link Aggregation, it does not require any particular switch configuration. | Yes | Yes |
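On Red Hat-style systems, the mode is usually set persistently through the BONDING_OPTS variable in the bond's ifcfg file. A sketch, assuming a bond0 device with a static address (the device name, addressing, and chosen mode are all illustrative):

```
# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BOOTPROTO=none
IPADDR=192.168.1.10
PREFIX=24
ONBOOT=yes
# mode=4 (802.3ad) requires switch support; mode=1 (active-backup) does not
BONDING_OPTS="mode=4 miimon=100"
```

Each slave interface then needs its own ifcfg file containing MASTER=bond0, SLAVE=yes, and ONBOOT=yes.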
Network Bonding Link Monitoring
The bonding driver supports two methods to monitor a slave’s link state:
MII (Media Independent Interface) Monitor
This is the default link monitoring option. The MII monitor checks only the carrier state of the local network interface: it obtains carrier state from the device driver, by querying the MII registers directly, or by using ethtool. You can specify the following settings for MII monitoring (an example follows the list):
- Monitoring frequency: The interval in milliseconds between carrier-state checks
- Link up delay: The time in milliseconds to wait before using a link that is up
- Link down delay: The time in milliseconds to wait before switching to another link when the active link is reported as down
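These three settings correspond to the bonding driver's miimon, updelay, and downdelay parameters. A sketch in the same ifcfg style (the values are illustrative; updelay and downdelay are rounded down to a multiple of miimon):

```
# Check carrier state every 100 ms; wait 200 ms before using a restored link,
# and 200 ms before disabling a link reported as down
BONDING_OPTS="mode=active-backup miimon=100 updelay=200 downdelay=200"
```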
ARP Monitor
This method of link monitoring sends ARP queries to peer systems on the network and uses the responses as an indication that the link is up. The ARP monitor relies on the device driver to keep the last receive time and the transmit start time updated; if the device driver does not update these times, the ARP monitor fails any slaves that use that device driver. You can specify the following settings for ARP monitoring (an example follows the list):
- Monitoring frequency: The interval in milliseconds at which ARP queries are sent
- ARP targets: A comma-separated list of IP addresses that ARP queries are sent to
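These two settings correspond to the arp_interval and arp_ip_target parameters (the bonding driver accepts up to 16 targets; the addresses below are illustrative). Note that ARP monitoring is used instead of, not together with, MII monitoring:

```
# Send ARP queries every second to two peers; a slave is considered up
# while replies keep arriving
BONDING_OPTS="mode=active-backup arp_interval=1000 arp_ip_target=192.168.1.1,192.168.1.254"
```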