Linux Ethernet Bonding Driver mini-howto

Initial release : Thomas Davis <tadavis at lbl.gov>
Corrections, HA extensions : 2000/10/03-15 :
  - Willy Tarreau <willy at meta-x.org>
  - Constantine Gavrilov <const-g at xpert.com>
  - Chad N. Tindel <ctindel at ieee dot org>
  - Janice Girouard <girouard at us dot ibm dot com>
  - Jay Vosburgh <fubar at us dot ibm dot com>
The bonding driver originally came from Donald Becker's beowulf patches for
kernel 2.0. It has changed quite a bit since then, and the original tools from
the extreme-linux and beowulf sites will not work with this version of the
driver.

For new versions of the driver, patches for older kernels, and the updated
userspace tools, please follow the links at the end of this file.
Table of Contents
=================

Configuring Multiple Bonds
Verifying Bond Configuration
Frequently Asked Questions
Promiscuous Sniffing notes
1) Build kernel with the bonding driver
---------------------------------------
For the latest version of the bonding driver, use kernel 2.4.12 or above
(otherwise you will need to apply a patch).

Configure the kernel with `make menuconfig/xconfig/config', and select
"Bonding driver support" in the "Network device support" section. It is
recommended to configure the driver as a module, since that is currently the
only way to pass parameters to the driver and configure more than one bonding
device.

Build and install the new kernel and modules.
2) Get and install the userspace tools
--------------------------------------
This version of the bonding driver requires an updated ifenslave program. The
original one from extreme-linux and beowulf will not work. Kernels 2.4.12
and above include the updated version of ifenslave.c in the
Documentation/networking directory. For older kernels, please follow the
links at the end of this file.

IMPORTANT!!! If you are running on Red Hat 7.1 or greater, you need
to be careful, because /usr/include/linux is no longer a symbolic link
to /usr/src/linux/include/linux. If you build ifenslave while this is
true, ifenslave will appear to succeed but your bond won't work. The purpose
of the -I option on the ifenslave compile line is to make sure it uses
/usr/src/linux/include/linux/if_bonding.h instead of the version from
/usr/include/linux.

To build and install ifenslave.c, do:

# gcc -Wall -Wstrict-prototypes -O -I/usr/src/linux/include ifenslave.c -o ifenslave
# cp ifenslave /sbin/ifenslave
3) Configure your system
------------------------
You will need to add at least the following line to /etc/modprobe.conf
so the bonding driver will automatically load when the bond0 interface is
configured:

alias bond0 bonding

Refer to the modprobe.conf manual page for specific modprobe.conf
syntax details. The Module Parameters section of this document describes each
bonding driver parameter.
Use standard distribution techniques to define the bond0 network interface.
For example, on modern Red Hat distributions, create an ifcfg-bond0 file in
the /etc/sysconfig/network-scripts directory that resembles the following:

DEVICE=bond0
IPADDR=192.168.1.1
NETMASK=255.255.255.0
NETWORK=192.168.1.0
BROADCAST=192.168.1.255
ONBOOT=yes
BOOTPROTO=none
USERCTL=no

(use appropriate values for your network above)
All interfaces that are part of a bond should have SLAVE and MASTER
definitions. For example, in the case of Red Hat, if you wish to make eth0 and
eth1 a part of the bonding interface bond0, their config files (ifcfg-eth0 and
ifcfg-eth1) should resemble the following:

DEVICE=eth0
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none

Use DEVICE=eth1 in the ifcfg-eth1 config file. If you configure a second
bonding interface (bond1), use MASTER=bond1 in the config file to make the
network interface a slave of bond1.
Restart the networking subsystem, or just bring up the bonding device if your
administration tools allow it. Otherwise, reboot. On Red Hat distros you can
issue `ifup bond0' or `/etc/rc.d/init.d/network restart'.

If the administration tools of your distribution do not support
master/slave notation in configuring network interfaces, you will need to
manually configure the bonding device with the following commands:

# /sbin/ifconfig bond0 192.168.1.1 netmask 255.255.255.0 \
	broadcast 192.168.1.255 up

# /sbin/ifenslave bond0 eth0
# /sbin/ifenslave bond0 eth1

(use appropriate values for your network above)

You can then create a script containing these commands and place it in the
appropriate rc directory.
If you specifically need all network drivers loaded before the bonding driver,
adding the following line to modprobe.conf will cause the network driver for
eth0 and eth1 to be loaded before the bonding driver:

install bond0 /sbin/modprobe -a eth0 eth1 && /sbin/modprobe bonding

Be careful not to reference bond0 itself at the end of the line, or modprobe
will die in an endless recursive loop.
If you are running SNMP agents, the bonding driver should be loaded before any
network drivers participating in a bond. This requirement is due to the
interface index (ipAdEntIfIndex) being associated with the first interface
found with a given IP address. That is, there is only one ipAdEntIfIndex for
each IP address. For example, if eth0 and eth1 are slaves of bond0 and the
driver for eth0 is loaded before the bonding driver, the interface for the IP
address will be associated with the eth0 interface. This configuration is
shown below: the IP address 192.168.1.1 has an interface index of 2, which
indexes to eth0 in the ifDescr table (ifDescr.2).

interfaces.ifTable.ifEntry.ifDescr.1 = lo
interfaces.ifTable.ifEntry.ifDescr.2 = eth0
interfaces.ifTable.ifEntry.ifDescr.3 = eth1
interfaces.ifTable.ifEntry.ifDescr.4 = eth2
interfaces.ifTable.ifEntry.ifDescr.5 = eth3
interfaces.ifTable.ifEntry.ifDescr.6 = bond0
ip.ipAddrTable.ipAddrEntry.ipAdEntIfIndex.10.10.10.10 = 5
ip.ipAddrTable.ipAddrEntry.ipAdEntIfIndex.192.168.1.1 = 2
ip.ipAddrTable.ipAddrEntry.ipAdEntIfIndex.10.74.20.94 = 4
ip.ipAddrTable.ipAddrEntry.ipAdEntIfIndex.127.0.0.1 = 1
This problem is avoided by loading the bonding driver before any network
drivers participating in a bond. Below is an example of loading the bonding
driver first; the IP address 192.168.1.1 is correctly associated with
ifDescr.2:

interfaces.ifTable.ifEntry.ifDescr.1 = lo
interfaces.ifTable.ifEntry.ifDescr.2 = bond0
interfaces.ifTable.ifEntry.ifDescr.3 = eth0
interfaces.ifTable.ifEntry.ifDescr.4 = eth1
interfaces.ifTable.ifEntry.ifDescr.5 = eth2
interfaces.ifTable.ifEntry.ifDescr.6 = eth3
ip.ipAddrTable.ipAddrEntry.ipAdEntIfIndex.10.10.10.10 = 6
ip.ipAddrTable.ipAddrEntry.ipAdEntIfIndex.192.168.1.1 = 2
ip.ipAddrTable.ipAddrEntry.ipAdEntIfIndex.10.74.20.94 = 5
ip.ipAddrTable.ipAddrEntry.ipAdEntIfIndex.127.0.0.1 = 1

While some distributions may not report the interface name in ifDescr,
the association between the IP address and IfIndex remains, and SNMP
functions such as Interface_Scan_Next will report that association.
Module Parameters
=================

Optional parameters for the bonding driver can be supplied as command line
arguments to the insmod command. Typically, these parameters are specified in
the file /etc/modprobe.conf (see the manual page for modprobe.conf). The
available bonding driver parameters are listed below. If a parameter is not
specified, the default value is used. When initially configuring a bond, it
is recommended that "tail -f /var/log/messages" be run in a separate window to
watch for bonding driver error messages.

It is critical that either the miimon or the arp_interval and arp_ip_target
parameters be specified; otherwise, serious network degradation will occur
during link failures.
arp_interval

Specifies the ARP monitoring frequency in milliseconds.
If ARP monitoring is used in a load-balancing mode (mode 0 or 2), the
switch should be configured in a mode that evenly distributes packets
across all links - such as round-robin. If the switch is configured to
distribute the packets in an XOR fashion, all replies from the ARP
targets will be received on the same link, which could cause the other
team members to fail. ARP monitoring should not be used in conjunction
with miimon. A value of 0 disables ARP monitoring. The default value
is 0.
arp_ip_target

Specifies the IP addresses to use when arp_interval is > 0. These
are the targets of the ARP request sent to determine the health of
the link to the targets. Specify these values in ddd.ddd.ddd.ddd
format. Multiple IP addresses must be separated by a comma. At least
one IP address needs to be given for ARP monitoring to work. The
maximum number of targets that can be specified is 16.
downdelay

Specifies the delay time in milliseconds to disable a link after a
link failure has been detected. This should be a multiple of the miimon
value; otherwise, the value will be rounded. The default value is 0.
lacp_rate

Option specifying the rate at which we'll ask our link partner to
transmit LACPDU packets in 802.3ad mode. Possible values are:

slow or 0
	Request partner to transmit LACPDUs every 30 seconds (default)

fast or 1
	Request partner to transmit LACPDUs every 1 second
max_bonds

Specifies the number of bonding devices to create for this
instance of the bonding driver. E.g., if max_bonds is 3, and
the bonding driver is not already loaded, then bond0, bond1
and bond2 will be created. The default value is 1.
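A minimal modprobe.conf sketch (the values here are illustrative, not recommendations) that uses max_bonds to create two bonding devices from a single load of the driver:

```shell
# /etc/modprobe.conf fragment (illustrative values)
alias bond0 bonding
alias bond1 bonding
# One driver instance provides both bond0 and bond1; note that all
# devices created via max_bonds share the same option set.
options bonding max_bonds=2 miimon=100
```

If the devices need different options, see the Configuring Multiple Bonds section instead.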
miimon

Specifies the frequency in milliseconds that MII link monitoring
will occur. A value of zero disables MII link monitoring. A value
of 100 is a good starting point. See the High Availability section for
additional information. The default value is 0.
mode

Specifies one of the bonding policies. The default is
round-robin (balance-rr). Possible values are (you can use
either the text or numeric option):

balance-rr or 0

	Round-robin policy: Transmit in sequential order
	from the first available slave through the last. This
	mode provides load balancing and fault tolerance.
active-backup or 1

	Active-backup policy: Only one slave in the bond is
	active. A different slave becomes active if, and only
	if, the active slave fails. The bond's MAC address is
	externally visible on only one port (network adapter)
	to avoid confusing the switch. This mode provides
	fault tolerance.
balance-xor or 2

	XOR policy: Transmit based on [(source MAC address
	XOR'd with destination MAC address) modulo slave
	count]. This selects the same slave for each
	destination MAC address. This mode provides load
	balancing and fault tolerance.
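The XOR selection arithmetic can be sketched in shell. For brevity this sketch XORs only the low-order octet of each MAC address; the octet values and slave count below are hypothetical, chosen purely for illustration:

```shell
# Sketch of balance-xor slave selection (illustrative values only).
src_mac_last=31     # low octet of the source MAC (hypothetical)
dst_mac_last=42     # low octet of the destination MAC (hypothetical)
slave_count=2

# (source MAC XOR destination MAC) modulo slave count
slave_index=$(( (src_mac_last ^ dst_mac_last) % slave_count ))
echo "slave $slave_index"
```

Because the result depends only on the address pair, every frame for a given destination leaves through the same slave, which is why this mode keeps per-destination ordering intact.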
broadcast or 3

	Broadcast policy: transmits everything on all slave
	interfaces. This mode provides fault tolerance.
802.3ad or 4

	IEEE 802.3ad Dynamic link aggregation. Creates aggregation
	groups that share the same speed and duplex settings.
	Transmits and receives on all slaves in the active
	aggregator.

	Prerequisites:

	1. Ethtool support in the base drivers for retrieving the
	speed and duplex of each slave.

	2. A switch that supports IEEE 802.3ad Dynamic link
	aggregation.
balance-tlb or 5

	Adaptive transmit load balancing: channel bonding that does
	not require any special switch support. The outgoing
	traffic is distributed according to the current load
	(computed relative to the speed) on each slave. Incoming
	traffic is received by the current slave. If the receiving
	slave fails, another slave takes over the MAC address of
	the failed receiving slave.

	Prerequisite:

	Ethtool support in the base drivers for retrieving the
	speed of each slave.
balance-alb or 6

	Adaptive load balancing: includes balance-tlb plus receive
	load balancing (rlb) for IPV4 traffic, and does not require
	any special switch support. The receive load balancing is
	achieved by ARP negotiation. The bonding driver intercepts
	the ARP Replies sent by the server on their way out and
	overwrites the source hw address with the unique hw address of
	one of the slaves in the bond, such that different clients
	use different hw addresses for the server.

	Receive traffic from connections created by the server is
	also balanced. When the server sends an ARP Request, the
	bonding driver copies and saves the client's IP information
	from the ARP. When the ARP Reply arrives from the client,
	its hw address is retrieved and the bonding driver
	initiates an ARP reply to this client, assigning it to one
	of the slaves in the bond. A problematic outcome of using
	ARP negotiation for balancing is that each time an ARP
	request is broadcast it uses the hw address of the
	bond. Hence, clients learn the hw address of the bond, and
	the balancing of receive traffic collapses to the current
	slave. This is handled by sending updates (ARP Replies) to
	all the clients with their assigned hw address, such that
	the traffic is redistributed. Receive traffic is also
	redistributed when a new slave is added to the bond and
	when an inactive slave is re-activated. The receive load is
	distributed sequentially (round robin) among the group of
	highest-speed slaves in the bond.

	When a link is reconnected or a new slave joins the bond,
	the receive traffic is redistributed among all active
	slaves in the bond by initiating ARP Replies with the
	selected MAC address to each of the clients. The updelay
	module parameter must be set to a value equal to or greater
	than the switch's forwarding delay, so that the ARP Replies
	sent to the clients will not be blocked by the switch.

	Prerequisites:

	1. Ethtool support in the base drivers for retrieving the
	speed of each slave.

	2. Base driver support for setting the hw address of a
	device while it is open. This is required so that there
	will always be one slave in the team using the bond hw
	address (the curr_active_slave) while having a unique hw
	address for each slave in the bond. If the curr_active_slave
	fails, its hw address is swapped with the new curr_active_slave
	that was chosen.
primary

A string (eth0, eth2, etc) to equate to a primary device. If this
value is entered, and the device is on-line, it will be used first
as the output media. Only when this device is off-line will
alternate devices be used. Otherwise, once a failover is detected
and a new default output is chosen, it will remain the output media
until it too fails. This is useful when one slave is preferred
over another, e.g. when one slave is 1000Mbps and another is
100Mbps. If the 1000Mbps slave fails and is later restored, it may
be desirable for the faster slave to gracefully become the active
slave again, without deliberately failing the 100Mbps slave.
Specifying a primary is only valid in active-backup mode.
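A modprobe.conf sketch combining primary with active-backup mode; the interface name and interval below are examples only, not values from this document:

```shell
# /etc/modprobe.conf fragment (illustrative values)
alias bond0 bonding
# eth0 (the faster link in this example) is preferred whenever it is up
options bond0 mode=active-backup miimon=100 primary=eth0
```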
updelay

Specifies the delay time in milliseconds to enable a link after a
link up status has been detected. This should be a multiple of the miimon
value; otherwise, the value will be rounded. The default value is 0.
use_carrier

Specifies whether miimon should use MII or ETHTOOL
ioctls vs. netif_carrier_ok() to determine the link status.
The MII or ETHTOOL ioctls are less efficient and utilize a
deprecated calling sequence within the kernel. The
netif_carrier_ok() function relies on the device driver to maintain its
state with netif_carrier_on/off; at this writing, most, but
not all, device drivers support this facility.

If bonding insists that the link is up when it should not be,
it may be that your network device driver does not support
netif_carrier_on/off. This is because the default state for
netif_carrier is "carrier on." In this case, disabling
use_carrier will cause bonding to revert to the MII / ETHTOOL
ioctl method to determine the link state.

A value of 1 enables the use of netif_carrier_ok(); a value of
0 will use the deprecated MII / ETHTOOL ioctls. The default
value is 1.
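For instance, if a driver in the bond is known not to maintain netif_carrier state, link checking can be forced back to the ioctl method. This is only a sketch; the miimon value is an example:

```shell
# /etc/modprobe.conf fragment (illustrative values)
alias bond0 bonding
# use_carrier=0 falls back to the MII / ETHTOOL ioctls
options bond0 miimon=100 use_carrier=0
```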
Configuring Multiple Bonds
==========================

If several bonding interfaces are required, either specify the max_bonds
parameter (described above), or load the driver multiple times. Using
the max_bonds parameter is less complicated, but has the limitation that
all bonding instances created will have the same options. Loading the
driver multiple times allows each instance of the driver to have differing
options.

For example, to configure two bonding interfaces, one with MII link
monitoring performed every 100 milliseconds, and one with ARP link
monitoring performed every 200 milliseconds, /etc/modprobe.conf should
resemble the following:
alias bond0 bonding
alias bond1 bonding
options bond0 miimon=100
options bond1 -o bonding1 arp_interval=200 arp_ip_target=10.0.0.1
Configuring Multiple ARP Targets
================================

While ARP monitoring can be done with just one target, it can be useful
in a High Availability setup to have several targets to monitor. In the
case of just one target, the target itself may go down or have a problem
making it unresponsive to ARP requests. Having an additional target (or
several) increases the reliability of the ARP monitoring.

Multiple ARP targets must be separated by commas as follows:

# example options for ARP monitoring with three targets
alias bond0 bonding
options bond0 arp_interval=60 arp_ip_target=192.168.0.1,192.168.0.3,192.168.0.9

For just a single target the options would resemble:

# example options for ARP monitoring with one target
alias bond0 bonding
options bond0 arp_interval=60 arp_ip_target=192.168.0.100
Potential Problems When Using ARP Monitor
=========================================

1. Driver Support

The ARP monitor relies on the network device driver to maintain two
statistics: the last receive time (dev->last_rx) and the last
transmit time (dev->trans_start). If the network device driver does
not update one or both of these, then the typical result will be that,
upon startup, all links in the bond will immediately be declared down
and remain that way. A network monitoring tool (tcpdump, e.g.) will
show ARP requests and replies being sent and received on the bonding
device.

The possible resolutions for this are to (a) fix the device driver, or
(b) discontinue the ARP monitor (using miimon as an alternative, for
example).
2. Adventures in Routing

When bonding is set up with the ARP monitor, it is important that the
slave devices not have routes that supersede routes of the master (or,
generally, not have routes at all). For example, suppose the bonding
device bond0 has two slaves, eth0 and eth1, and the routing table is
as follows:

Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
10.0.0.0        0.0.0.0         255.255.0.0     U        40 0          0 eth0
10.0.0.0        0.0.0.0         255.255.0.0     U        40 0          0 eth1
10.0.0.0        0.0.0.0         255.255.0.0     U        40 0          0 bond0
127.0.0.0       0.0.0.0         255.0.0.0       U        40 0          0 lo

In this case, the ARP monitor (and ARP itself) may become confused,
because ARP requests will be sent on one interface (bond0), but the
corresponding reply will arrive on a different interface (eth0). This
reply looks to ARP like an unsolicited ARP reply (because ARP matches
replies on an interface basis), and is discarded. This will likely
still update the receive/transmit times in the driver, but will lose
packets.

The resolution here is simply to ensure that slaves do not have routes
of their own, and if for some reason they must, those routes do not
supersede routes of their master. This should generally be the case,
but unusual configurations or errant manual or automatic static route
additions may cause trouble.
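A quick check for this condition can be scripted by counting routes whose output device is a slave rather than the bond. The sketch below runs grep over an embedded copy of the routing table above so it is self-contained; on a live system you would pipe in `netstat -rn` output instead:

```shell
# Count 10.0.0.0 routes that point at a slave (eth0/eth1) instead of bond0.
# Sample table embedded for illustration; use `netstat -rn` on a real system.
dup=$(grep -c '^10\.0\.0\.0.*eth[01]$' <<'EOF'
10.0.0.0        0.0.0.0         255.255.0.0     U        40 0          0 eth0
10.0.0.0        0.0.0.0         255.255.0.0     U        40 0          0 eth1
10.0.0.0        0.0.0.0         255.255.0.0     U        40 0          0 bond0
127.0.0.0       0.0.0.0         255.0.0.0       U        40 0          0 lo
EOF
)
echo "routes via slave devices: $dup"
```

A nonzero count for a monitored network would indicate exactly the problematic configuration described above.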
Switch Configuration
====================

While the switch does not need to be configured when the active-backup,
balance-tlb or balance-alb policies (mode=1,5,6) are used, it does need to
be configured for the round-robin, XOR, broadcast, or 802.3ad policies
to work.
Verifying Bond Configuration
============================

1) Bonding information files
----------------------------
The bonding driver information files reside in the /proc/net/bonding
directory.

Sample contents of /proc/net/bonding/bond0 after the driver is loaded with
parameters of mode=0 and miimon=1000 are shown below:
Bonding Mode: load balancing (round-robin)
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 1000

Slave Interface: eth1
MII Status: up
Link Failure Count: 1

Slave Interface: eth0
MII Status: up
Link Failure Count: 1
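Since the status file is plain text, fields such as the active slave can be pulled out with standard tools. The sketch below runs sed over an embedded sample so it is self-contained; on a live system you would read /proc/net/bonding/bond0 directly:

```shell
# Extract the currently active slave from bonding status output.
# Replace the here-document with /proc/net/bonding/bond0 on a real system.
active=$(sed -n 's/^Currently Active Slave: //p' <<'EOF'
Bonding Mode: load balancing (round-robin)
Currently Active Slave: eth0
MII Polling Interval (ms): 1000
EOF
)
echo "active slave: $active"
```

This kind of one-liner is handy in monitoring scripts that need to alarm when the active slave changes.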
2) Network verification
-----------------------
The network configuration can be verified using the ifconfig command. In
the example below, the bond0 interface is the master (MASTER) while eth0 and
eth1 are slaves (SLAVE). Notice that all slaves of bond0 have the same MAC
address (HWaddr) as bond0 for all modes except TLB and ALB, which require a
unique MAC address for each slave.
[root]# /sbin/ifconfig
bond0     Link encap:Ethernet  HWaddr 00:C0:F0:1F:37:B4
          inet addr:XXX.XXX.XXX.YYY  Bcast:XXX.XXX.XXX.255  Mask:255.255.252.0
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:7224794 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3286647 errors:1 dropped:0 overruns:1 carrier:0
          collisions:0 txqueuelen:0

eth0      Link encap:Ethernet  HWaddr 00:C0:F0:1F:37:B4
          inet addr:XXX.XXX.XXX.YYY  Bcast:XXX.XXX.XXX.255  Mask:255.255.252.0
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:3573025 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1643167 errors:1 dropped:0 overruns:1 carrier:0
          collisions:0 txqueuelen:100
          Interrupt:10 Base address:0x1080

eth1      Link encap:Ethernet  HWaddr 00:C0:F0:1F:37:B4
          inet addr:XXX.XXX.XXX.YYY  Bcast:XXX.XXX.XXX.255  Mask:255.255.252.0
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:3651769 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1643480 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:100
          Interrupt:9 Base address:0x1400
Frequently Asked Questions
==========================

1. Is it SMP safe?

Yes. The old 2.0.xx channel bonding patch was not SMP safe.
The new driver was designed to be SMP safe from the start.
2. What type of cards will work with it?

Any Ethernet type cards (you can even mix cards - an Intel
EtherExpress PRO/100 and a 3com 3c905b, for example).
You can even bond together Gigabit Ethernet cards!
3. How many bonding devices can I have?

There is no limit.

4. How many slaves can a bonding device have?

Limited by the number of network interfaces Linux supports and/or the
number of network cards you can place in your system.
5. What happens when a slave link dies?

If your ethernet cards support MII or ETHTOOL link status monitoring
and the MII monitoring has been enabled in the driver (see description
of module parameters), there will be no adverse consequences. This
release of the bonding driver knows how to get the MII information and
enables or disables its slaves according to their link status.
See the section on High Availability for additional information.

For ethernet cards not supporting MII status, the arp_interval and
arp_ip_target parameters must be specified for bonding to work
correctly. If packets have not been sent or received during the
specified arp_interval duration, an ARP request is sent to the
targets to generate send and receive traffic. If, after this
interval, either the successful send and/or receive count has not
incremented, the next slave in the sequence will become the active
slave.

If neither miimon nor arp_interval is configured, the bonding
driver will not handle this situation very well. The driver will
continue to send packets, but some packets will be lost. Retransmits
will cause serious degradation of performance (in the case when one
of two slave links fails, 50% of packets will be lost, which is a
serious problem for both TCP and UDP).
6. Can bonding be used for High Availability?

Yes, if you use MII monitoring and ALL your cards support MII link
status reporting. See the section on High Availability for more
information.
7. Which switches/systems does it work with?

In round-robin and XOR mode, it works with systems that support
trunking:

* Many Cisco switches and routers (look for EtherChannel support).
* SunTrunking software.
* Alteon AceDirector switches / WebOS (use Trunks).
* BayStack Switches (trunks must be explicitly configured). Stackable
  models (450) can define trunks between ports on different physical
  units.
* Linux bonding, of course!

In 802.3ad mode, it works with systems that support IEEE 802.3ad
Dynamic Link Aggregation:

* Extreme Networks Summit 7i (look for link-aggregation).
* Many Cisco switches and routers (look for LACP support; this may
  require an upgrade to your IOS software; LACP support was added
  by Cisco in late 2002).
* Foundry Big Iron 4000

In active-backup, balance-tlb and balance-alb modes, it should work
with any Layer-II switch.
8. Where does a bonding device get its MAC address from?

If not explicitly configured with ifconfig, the MAC address of the
bonding device is taken from its first slave device. This MAC address
is then passed to all following slaves and remains persistent (even if
the first slave is removed) until the bonding device is brought
down or reconfigured.

If you wish to change the MAC address, you can set it with ifconfig:

# ifconfig bond0 hw ether 00:11:22:33:44:55

The MAC address can also be changed by bringing down/up the device
and then changing its slaves (or their order):

# ifconfig bond0 down ; modprobe -r bonding
# ifconfig bond0 .... up
# ifenslave bond0 eth...

This method will automatically take the address from the next slave
that is added.

To restore your slaves' MAC addresses, you need to detach them
from the bond (`ifenslave -d bond0 eth0'). The bonding driver will then
restore the MAC addresses that the slaves had before they were enslaved.
9. Which transmit policies can be used?

Round-robin: based on the order of enslaving, the output device
is selected as the next available slave, regardless of
the source and/or destination of the packet.

Active-backup: a policy that ensures that one and only one device will
transmit at any given moment. Active-backup policy is useful for
implementing high availability solutions using two hubs (see
the section on High Availability).

XOR: based on (src hw addr XOR dst hw addr) % slave count. This
policy selects the same slave for each destination hw address.

Broadcast: transmits everything on all slave interfaces.

802.3ad: based on XOR, but distributes traffic among all interfaces
in the active aggregator.

Transmit load balancing (balance-tlb) balances the traffic
according to the current load on each slave. The balancing is
done per client, and the least loaded slave is selected for each new
client. The load of each slave is calculated relative to its speed,
which enables load balancing in mixed-speed teams.

Adaptive load balancing (balance-alb) uses the transmit load
balancing for the transmit load. The receive load is balanced only
among the group of highest-speed active slaves in the bond. The
load is distributed round robin, i.e. to the next available slave in
the high-speed group of active slaves.
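The round-robin policy at the top of this list can be sketched in a few lines of shell; the packet numbers and slave count are illustrative only:

```shell
# Sketch of round-robin transmit selection: the Nth packet leaves
# through slave (N % slave_count). Values are illustrative only.
slave_count=2
out=""
for pkt in 0 1 2 3; do
  out="$out$(( pkt % slave_count ))"
done
echo "slave indices for four packets: $out"
```

Note the contrast with balance-xor: round-robin ignores addresses entirely, so consecutive packets of one flow may take different links.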
High Availability
=================

To implement high availability using the bonding driver, the driver needs to
be compiled as a module, because currently that is the only way to pass
parameters to the driver. This may change in the future.

High availability is achieved by using MII or ETHTOOL status reporting. You
need to verify that all your interfaces support MII or ETHTOOL link status
reporting. On Linux kernel 2.2.17, all the 100 Mbps capable drivers and the
yellowfin gigabit driver support MII. To determine if ETHTOOL link reporting
is available for interface eth0, type "ethtool eth0"; the "Link detected:"
line should contain the correct link status. If your system has an interface
that does not support MII or ETHTOOL status reporting, a failure of its link
will not be detected! A message indicating MII and ETHTOOL are not supported
by a network driver is logged when the bonding driver is loaded with a
non-zero miimon value.
The bonding driver can regularly check all its slaves' links using the ETHTOOL
IOCTL (ETHTOOL_GLINK command) or by checking the MII status registers. The
check interval is specified by the module argument "miimon" (MII monitoring).
It takes an integer that represents the checking time in milliseconds. It
should not come too close to (1000/HZ) (10 milliseconds on i386), because it
may then reduce system interactivity. A value of 100 seems to be a good
starting point. It means that a dead link will be detected at most 100
milliseconds after it goes down.

Example:

# modprobe bonding miimon=100

Or, put the following line in /etc/modprobe.conf:

options bond0 miimon=100
There are currently two policies for high availability. They are dependent on
whether:

a) hosts are connected to a single host or switch that supports trunking

b) hosts are connected to several different switches or a single switch that
   does not support trunking
1) High Availability on a single switch or host - load balancing
----------------------------------------------------------------
It is the easiest to set up and to understand. Simply configure the
remote equipment (host or switch) to aggregate traffic over several
ports (Trunk, EtherChannel, etc.) and configure the bonding interfaces.
If the module has been loaded with the proper MII option, it will work
automatically. You can then try to remove and restore different links
and see in your logs what the driver detects. When testing, you may
encounter problems on some buggy switches that disable the trunk for a
long time if all ports in the trunk go down. This is not a Linux problem,
but really a problem with the switch (reboot the switch to confirm).
Example 1 : host to host at twice the speed

          +----------+                          +----------+
          |          |eth0                  eth0|          |
          |  Host A  +--------------------------+  Host B  |
          |          +--------------------------+          |
          |          |eth1                  eth1|          |
          +----------+                          +----------+

On each host :
# modprobe bonding miimon=100
# ifconfig bond0 addr
# ifenslave bond0 eth0 eth1
Example 2 : host to switch at twice the speed

          +----------+                          +----------+
          |          |eth0                 port1|          |
          |  Host A  +--------------------------+  switch  |
          |          +--------------------------+          |
          |          |eth1                 port2|          |
          +----------+                          +----------+

On host A :                     On the switch :
# modprobe bonding miimon=100   # set up a trunk on port1
# ifconfig bond0 addr           #   and port2
# ifenslave bond0 eth0 eth1
2) High Availability on two or more switches (or a single switch without
   trunking support)
---------------------------------------------------------------------------
This mode is more problematic, because it relies on the fact that there
are multiple ports and the host's MAC address should be visible on one
port only to avoid confusing the switches.

If you need to know which interface is the active one, and which ones are
backups, use ifconfig. All backup interfaces have the NOARP flag set.

To use this mode, pass "mode=1" to the module at load time :

# modprobe bonding miimon=100 mode=active-backup

or :

# modprobe bonding miimon=100 mode=1

Or, put in your /etc/modprobe.conf :

options bond0 miimon=100 mode=active-backup
Example 1: Using multiple host and multiple switches to build a "no single
point of failure" solution.

                |                                       |
                |port3                             port3|
          +-----+----+                          +-----+----+
          |          |port7       ISL      port7|          |
          | switch A +--------------------------+ switch B |
          |          +--------------------------+          |
          +----++----+                          +-----++---+
          port2||port1                          port1||port2
                ||             +-------+               ||
                |+-------------+ host1 +---------------+|
                |  eth0        +-------+          eth1  |
                |                                       |
                |              +-------+                |
                +--------------+ host2 +----------------+
                   eth0        +-------+          eth1
In this configuration, there is an ISL - Inter Switch Link (it could be a
trunk), several servers (host1, host2 ...) each attached to both switches, and
one or more ports to the outside world (port3...). One and only one slave on
each host is active at a time, while all links are still monitored (the system
can detect a failure of both active and backup links).
Each time a host changes its active interface, it sticks to the new one until
it goes down. In this example, the hosts are negligibly affected by the
expiration time of the switches' forwarding tables.
If host1 and host2 have the same functionality and are used in load balancing
by another external mechanism, it is good to have host1's active interface
connected to one switch and host2's to the other. Such a system will survive
the failure of a single host, cable, or switch. The worst that can happen in
the case of a switch failure is that half of the hosts will be temporarily
unreachable until the other switch expires its tables.
Example 2: Using multiple Ethernet cards connected to a switch to configure
NIC failover (the switch is not required to support trunking).
    +----------+                          +----------+
    |          |eth0                 port1|          |
    |  Host A  +--------------------------+  switch  |
    |          +--------------------------+          |
    |          |eth1                 port2|          |
    +----------+                          +----------+
  On host A :                               On the switch :
    # modprobe bonding miimon=100 mode=1    # (optional) minimize the time
    # ifconfig bond0 addr                   # for table expiration
    # ifenslave bond0 eth0 eth1
Each time the host changes its active interface, it sticks to the new one until
it goes down. In this example, the host is strongly affected by the expiration
time of the switch forwarding table.
3) Adapting to your switches' timing
------------------------------------
If your switches take a long time to go into backup mode, it may be
desirable not to activate a backup interface immediately after a link goes
down. It is possible to delay the moment at which a link will be
completely disabled by passing the module parameter "downdelay" (in
milliseconds, must be a multiple of miimon).
When a switch reboots, it is possible that its ports report "link up" status
before they become usable. This could fool a bond device by causing it to
use some ports that are not ready yet. It is possible to delay the moment at
which an active link will be reused by passing the module parameter "updelay"
(in milliseconds, must be a multiple of miimon).
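Since both delays must be multiples of miimon, a quick arithmetic check before loading the module catches a bad value. A sketch with illustrative numbers (the values themselves are just examples):

```shell
# Illustrative values: miimon is the link-check interval in milliseconds;
# downdelay and updelay must divide evenly by it.
miimon=100
downdelay=2000
updelay=5000

if [ $((downdelay % miimon)) -ne 0 ] || [ $((updelay % miimon)) -ne 0 ]; then
    echo "error: downdelay/updelay must be multiples of miimon" >&2
    exit 1
fi

# downdelay=2000 with miimon=100 means the link must test down on
# 20 consecutive checks before the slave is really disabled.
checks=$((downdelay / miimon))
echo "a failed link is disabled after $checks consecutive down checks"
```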
A similar situation can occur when a host re-negotiates a lost link with the
switch (a case of cable replacement).

A special case is when a bonding interface has lost all slave links. Then the
driver will immediately reuse the first link that goes up, even if the updelay
parameter was specified. (If there are slave interfaces in the "updelay" state,
the interface that first went into that state will be immediately reused.) This
makes it possible to reduce down-time if the value of updelay has been
overestimated.
Examples :

    # modprobe bonding miimon=100 mode=1 downdelay=2000 updelay=5000
    # modprobe bonding miimon=100 mode=balance-rr downdelay=0 updelay=5000
Promiscuous Sniffing notes
==========================
If you wish to bond channels together for a network sniffing
application --- you wish to run tcpdump, or ethereal, or an IDS like
snort, with its input aggregated from multiple interfaces using the
bonding driver --- then you need to handle the promiscuous interface
setting by hand. Specifically, when you "ifconfig bond0 up" you
must add the promisc flag there; it will be propagated down to the
slave interfaces at ifenslave time; a full example might look like:
  ifconfig bond0 promisc up
  for if in eth1 eth2 ...;do
      ifenslave bond0 $if
  done

  snort ... -i bond0 ...
Ifenslave also wants to propagate addresses from interface to
interface, as is appropriate for its design functions of HA and channel
capacity aggregation; but it works fine for unnumbered interfaces;
just ignore all the warnings it emits.
It is possible to configure VLAN devices over a bond interface using the 8021q
driver. However, only packets coming from the 8021q driver and passing through
bonding will be tagged by default. Self-generated packets, like bonding's
learning packets or ARP packets generated by either ALB mode or the ARP
monitor mechanism, are tagged internally by bonding itself. As a result,
bonding has to "learn" what VLAN IDs are configured on top of it, and it uses
those IDs to tag self-generated packets.
For simplicity reasons, and to support the use of adapters that can do VLAN
hardware acceleration offloading, the bonding interface declares itself as
fully hardware offloading capable: it gets the add_vid/kill_vid notifications
to gather the necessary information, and it propagates those actions to the
slaves.
In case of mixed adapter types, hardware accelerated tagged packets that should
go through an adapter that is not offloading capable are "un-accelerated" by the
bonding driver so the VLAN tag sits in the regular location.
VLAN interfaces *must* be added on top of a bonding interface only after
enslaving at least one slave. This is because until the first slave is added the
bonding interface has a HW address of 00:00:00:00:00:00, which will be copied by
the VLAN interface when it is created.
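The safe ordering can be illustrated with the 8021q-era tools; a sketch (the VLAN id 100 is an arbitrary example, and "addr" is a placeholder as in the examples above):

```
# modprobe bonding miimon=100 mode=1
# ifenslave bond0 eth0 eth1     # enslave first: bond0 now has a real MAC
# vconfig add bond0 100         # only then create the VLAN device
# ifconfig bond0.100 addr       # bond0.100 has copied bond0's HW address
```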
Notice that a problem would occur if all slaves are released from a bond that
still has VLAN interfaces on top of it. When later coming to add new slaves, the
bonding interface would get a HW address from the first slave, which might not
match that of the VLAN interfaces. It is recommended that either all VLANs be
removed and then re-added, or that the bonding interface's HW address be set
manually so that it matches the VLAN's. (Note: changing a VLAN interface's HW
address would set the underlying device -- i.e. the bonding interface -- to
promiscuous mode, which might not be what you want.)
Limitations
===========

The main limitations are :
  - only the link status is monitored. If the switch on the other side is
    partially down (e.g. doesn't forward anymore, but the link is OK), the link
    won't be disabled. Another way to check for a dead link could be to count
    incoming frames on a heavily loaded host. This is not applicable to small
    servers, but may be useful when the front switches send multicast
    information on their links (e.g. VRRP), or even health-check the servers.
    Use the arp_interval/arp_ip_target parameters to count incoming/outgoing
    frames.
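For example, the ARP monitor can be enabled when loading the module; a sketch (the interval, in milliseconds, and the target IP are arbitrary examples):

```
# modprobe bonding mode=1 arp_interval=2000 arp_ip_target=192.168.0.1
```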
Resources and links
===================

Current development on this driver is posted to:
- http://www.sourceforge.net/projects/bonding/

Donald Becker's Ethernet Drivers and diag programs may be found at :
- http://www.scyld.com/network/
You will also find a lot of information regarding Ethernet, NWay, MII, etc. at
Patches for 2.2 kernels are at Willy Tarreau's site :
- http://wtarreau.free.fr/pub/bonding/
- http://www-miaif.lip6.fr/~tarreau/pub/bonding/
To get the latest information about Linux kernel development, please consult
the Linux Kernel Mailing List Archives at :
http://www.ussg.iu.edu/hypermail/linux/kernel/