CentOS / RHEL 7 : How to configure Network Bonding or NIC teaming

by admin

Network interface bonding is called by many names: Port Trunking, Channel Bonding, Link Aggregation, NIC teaming, and others. It combines or aggregates multiple network connections into a single channel bonding interface. This allows two or more network interfaces to act as one, to increase throughput and to provide redundancy or failover.

The Linux kernel comes with the bonding driver for aggregating multiple physical network interfaces into a single logical interface (for example, aggregating eth0 and eth1 into bond0). For each bonded interface you can define the mode and the link monitoring options. There are seven different mode options, each providing specific load balancing and fault tolerance characteristics as shown in the table below.

Bonding modes

Depending on your requirements, you can set the bonding mode to any of the seven modes below; an example of how the mode is passed to the driver follows the table.

Mode 0: Round Robin (balance-rr)
Packets are transmitted and received sequentially through each interface, one by one.
Fault tolerance: No. Load balancing: Yes.

Mode 1: Active Backup (active-backup)
One NIC is active while the other is standby; if the active NIC goes down, the standby NIC becomes active. Only supported in x86 environments.
Fault tolerance: Yes. Load balancing: No.

Mode 2: XOR (balance-xor)
A slave NIC is selected for each destination MAC address; once that pairing is established, the same NIC is used to transmit and receive for that destination.
Fault tolerance: Yes. Load balancing: Yes.

Mode 3: Broadcast (broadcast)
All transmissions are sent on all slave interfaces.
Fault tolerance: Yes. Load balancing: No.

Mode 4: Dynamic Link Aggregation (802.3ad)
The aggregated NICs act as one NIC, which results in higher throughput and also provides failover if a NIC fails. Requires a switch that supports IEEE 802.3ad.
Fault tolerance: Yes. Load balancing: Yes.

Mode 5: Transmit Load Balancing (balance-tlb)
Outgoing traffic is distributed according to the current load on each slave interface. Incoming traffic is received by the current slave; if the receiving slave fails, another slave takes over its MAC address.
Fault tolerance: Yes. Load balancing: Yes.

Mode 6: Adaptive Load Balancing (balance-alb)
Unlike Dynamic Link Aggregation, Adaptive Load Balancing does not require any special switch configuration. Incoming packets are load balanced through ARP negotiation. Only supported in x86 environments.
Fault tolerance: Yes. Load balancing: Yes.
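The mode can be passed to the bonding driver either by its number or by its name; the two BONDING_OPTS lines below are equivalent (the miimon value here is illustrative):

    BONDING_OPTS="mode=1 miimon=100"
    BONDING_OPTS="mode=active-backup miimon=100"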

Network Bonding Link Monitoring

The bonding driver supports two methods to monitor a slave’s link state; example options for both appear after the list:

  • MII (Media Independent Interface) monitor
    • This is the default, and recommended, link monitoring option.
    • It monitors the carrier state of the local network interface.
    • You can specify the monitoring frequency and the delay.
    • Delay times allow you to account for switch initialization.
  • ARP monitor
    • This sends ARP queries to peer systems on the network and uses the response as an indication that the link is up.
    • You can specify the monitoring frequency and target addresses.
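Both monitors are set through the same BONDING_OPTS string used in the interface file. A minimal sketch of each follows; the intervals are illustrative, and 192.168.2.1 is a hypothetical ARP target on the local subnet:

    # MII monitoring: poll the link every 100 ms; wait 200 ms before using
    # a link that has just come up, to allow for switch initialization
    BONDING_OPTS="mode=active-backup miimon=100 updelay=200"

    # ARP monitoring: send an ARP query to 192.168.2.1 every 1000 ms
    BONDING_OPTS="mode=active-backup arp_interval=1000 arp_ip_target=192.168.2.1"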

Network Bonding: Configuration

Creating a Bonding Interface File

You can manually create a bonding interface file in the /etc/sysconfig/network-scripts directory. You first create the bonding interface, and then you add the physical network interfaces to the bond. These physical network interfaces are called “slaves”.

For the example in this post, the slaves for the bond0 interface are ens33 and ens37. Before starting, make sure the bonding module is loaded. To verify that, use the command shown below:

    # lsmod |grep bonding
    bonding               122351  0

If the module is not loaded, load it using the modprobe command:

    # modprobe bonding
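To see the bonding driver version and the full list of module parameters (mode, miimon, updelay, and so on), you can also query the module metadata:

    # modinfo bonding | head -5
    # modinfo -p bonding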

1. The following is an example of a bonding interface file:

    # cat /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    BONDING_OPTS="miimon=1 updelay=0 downdelay=0 mode=active-backup"
    TYPE=Bond
    BONDING_MASTER=yes
    BOOTPROTO=none
    IPADDR=192.168.2.12
    PREFIX=24
    DEFROUTE=yes
    IPV4_FAILURE_FATAL=no
    IPV6INIT=yes
    IPV6_AUTOCONF=yes
    IPV6_DEFROUTE=yes
    IPV6_FAILURE_FATAL=no
    IPV6_ADDR_GEN_MODE=stable-privacy
    NAME=bond0
    UUID=bbe539aa-5042-4d28-a0e6-2a4d4f5dd744
    ONBOOT=yes

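One note on the options above: miimon=1 polls the link state every millisecond, which is unusually aggressive; a commonly used value is 100 ms, for example:

    BONDING_OPTS="miimon=100 updelay=0 downdelay=0 mode=active-backup"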
2. The following example defines the ens33 physical network interface as a slave for bond0:

    # cat /etc/sysconfig/network-scripts/ifcfg-ens33 
    TYPE=Ethernet
    NAME=ens33
    UUID=817e285b-60f0-42d8-b259-4b62e21d823d
    DEVICE=ens33
    ONBOOT=yes
    MASTER=bond0
    SLAVE=yes

3. The following example defines the ens37 physical network interface as a slave for bond0:

    # cat /etc/sysconfig/network-scripts/ifcfg-ens37 
    TYPE=Ethernet
    NAME=ens37
    UUID=f0c23472-1aec-4e84-8f1b-be8a2ecbeade
    DEVICE=ens37
    ONBOOT=yes
    MASTER=bond0
    SLAVE=yes
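Since the two slave files differ only in the interface name (the UUID line is optional when you write the files by hand), you could also generate them with a small loop; this is just a sketch:

    # for i in ens33 ens37; do
    >     printf 'TYPE=Ethernet\nNAME=%s\nDEVICE=%s\nONBOOT=yes\nMASTER=bond0\nSLAVE=yes\n' "$i" "$i" > /etc/sysconfig/network-scripts/ifcfg-$i
    > done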

4. Restart the network service to enable the bonding interface:

    # systemctl restart network

If you do not want to restart the network service, you can bring up the bonding interface individually:

    # ifup bond0
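If the slave interfaces were not already up, you can bring them up individually as well (on most RHEL 7 systems, ifup bond0 also brings up the configured slaves, so this step may not be needed):

    # ifup ens33
    # ifup ens37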

Verify the network bonding configuration

1. Check the new interface in the ‘ip addr show’ command output:

    # ip addr show
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    2: ens33: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
        link/ether 00:0c:29:54:f7:20 brd ff:ff:ff:ff:ff:ff
    4: ens37: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
        link/ether 00:0c:29:54:f7:20 brd ff:ff:ff:ff:ff:ff
    5: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
        link/ether 00:0c:29:54:f7:20 brd ff:ff:ff:ff:ff:ff
        inet 192.168.2.12/24 brd 192.168.2.255 scope global bond0
           valid_lft forever preferred_lft forever
        inet6 fe80::20c:29ff:fe54:f720/64 scope link 
           valid_lft forever preferred_lft forever
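Notice that ens33, ens37, and bond0 all report the same MAC address: in active-backup mode, the bonding driver assigns the bond’s MAC address to the slaves by default. The permanent hardware addresses are still visible in the driver’s status file:

    # grep "Permanent HW addr" /proc/net/bonding/bond0
    Permanent HW addr: 00:0c:29:54:f7:20
    Permanent HW addr: 00:0c:29:54:f7:34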

2. Also verify the current status of the bonding interface, including which slave is currently active, using the command below:

    # cat /proc/net/bonding/bond0
    Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
    
    Bonding Mode: fault-tolerance (active-backup)
    Primary Slave: None
    Currently Active Slave: ens33
    MII Status: up
    MII Polling Interval (ms): 1
    Up Delay (ms): 0
    Down Delay (ms): 0
    
    Slave Interface: ens33
    MII Status: up
    Speed: 1000 Mbps
    Duplex: full
    Link Failure Count: 0
    Permanent HW addr: 00:0c:29:54:f7:20
    Slave queue ID: 0
    
    Slave Interface: ens37
    MII Status: up
    Speed: 1000 Mbps
    Duplex: full
    Link Failure Count: 0
    Permanent HW addr: 00:0c:29:54:f7:34
    Slave queue ID: 0

From the command output above, we can see that ens33 is the currently active slave in the bond.

Testing fault tolerance of the bonding configuration

1. As this is an active-backup bonding configuration, when one interface goes down, the other interface in the bond becomes the active slave. To verify this functionality, we will bring down the currently active interface, ens33.

    # ifdown ens33
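To watch the failover as it happens, you can run a continuous ping to a peer in one terminal (192.168.2.1 is a hypothetical address on the same subnet) while monitoring the active slave in another:

    # ping 192.168.2.1
    # watch -n 1 'grep "Currently Active Slave" /proc/net/bonding/bond0'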

2. If you check the bond interface status again, you will find that the new active slave is the interface ens37.

    # cat /proc/net/bonding/bond0
    Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
    
    Bonding Mode: fault-tolerance (active-backup)
    Primary Slave: None
    Currently Active Slave: ens37
    MII Status: up
    MII Polling Interval (ms): 1
    Up Delay (ms): 0
    Down Delay (ms): 0
    
    Slave Interface: ens37
    MII Status: up
    Speed: 1000 Mbps
    Duplex: full
    Link Failure Count: 0
    Permanent HW addr: 00:0c:29:54:f7:34
    Slave queue ID: 0

The bond interface itself also remains up and running:

    # ip add show bond0
    5: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
        link/ether 00:0c:29:54:f7:20 brd ff:ff:ff:ff:ff:ff
        inet 192.168.2.12/24 brd 192.168.2.255 scope global bond0
           valid_lft forever preferred_lft forever
        inet6 fe80::20c:29ff:fe54:f720/64 scope link 
           valid_lft forever preferred_lft forever
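After the test, bring ens33 back up. Since no primary slave is configured, the driver does not switch back automatically, so ens37 should remain the active slave while ens33 rejoins the bond as a backup:

    # ifup ens33
    # grep "Currently Active Slave" /proc/net/bonding/bond0
    Currently Active Slave: ens37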
