Overview of fence device configuration
Fencing is a requirement for every operational cluster. The first step when setting up fencing for a cluster is to set up the hardware or software device that performs the actual fencing.
The Red Hat Enterprise Linux High Availability Add-on provides a variety of fencing agents for use with different fence devices. The pcs stonith list command provides a list of all installed fencing agents:
# pcs stonith list
fence_apc - Fence agent for APC over telnet/ssh
fence_apc_snmp - Fence agent for APC over SNMP
fence_bladecenter - Fence agent for IBM Bladecenter
...
Depending on the fence device and fencing agent in use, different parameters are required. The parameters are passed by the fencing agent to the fencing device. Communication between the fencing agent and the fencing device is established, and fencing of a cluster node succeeds, only if the required set of parameters is passed. A man page is available on the system for every shipped fencing agent; the man page describes the parameters that can be passed to the fencing device. A list of the possible and required parameters for a specific fencing agent can also be obtained by executing the command pcs stonith describe [FENCINGAGENT].
# pcs stonith describe fence_rhevm
Examples of fence device configuration
Different fencing devices require different hardware and software configurations. The hardware must be set up and configured, the required software must be configured, and the various configuration parameters must be documented for later use with the fencing agent.
APC network power switch fencing
One way to configure power fencing is to use an APC network power switch. The hardware setup includes power cabling of the cluster nodes with the APC network power switch. Fencing with an APC network power switch requires the fencing agent to log into the power switch to control the power outlet of a specific node. For setting up fencing with an APC fence device, it is important to document at least the following switch settings for later use with the fencing agent:
- IP address of the APC fence device.
- Username and password to access the APC fence device.
- Whether the device is accessed via SSH or telnet.
- The plug ID(s) for each cluster node.
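Once documented, these settings map directly onto the fencing agent's parameters. The following is a minimal sketch of creating an APC fence device with pcs; the device name, IP address, credentials, and plug number are placeholder values, and the exact parameter names should be verified with pcs stonith describe fence_apc:

```shell
# Hypothetical APC fence device for cluster node "nodea"
# (all values below are placeholders, not real settings)
pcs stonith create fence_nodea fence_apc \
    ipaddr=192.168.0.100 login=apc passwd=apcpass \
    pcmk_host_list=nodea plug=1 ssh=1
```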
Management hardware fencing
Management hardware, such as ILO, DRAC, or IPMI hardware, can power down, power up, and power cycle systems. At a minimum, the following parameters need to be configured and known to the cluster administrator to use management cards as fencing devices:
- IP address of the management device.
- Username and password to access the management fence device.
- Which machines are handled by the management fence device.
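These parameters can be verified directly with the fencing agent before the cluster is configured. A hedged sketch using fence_ipmilan follows; the address and credentials are placeholders, and the available options should be checked with fence_ipmilan -h:

```shell
# Hypothetical status query against an IPMI management card
# (placeholder address and credentials)
fence_ipmilan --ip=192.168.0.101 --username=admin \
    --password=secret --lanplus --action=status
```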
SCSI fencing
SCSI fencing does not require any physical hardware dedicated to fencing. The cluster administrator needs to know which shared device must be blocked from cluster node access with a SCSI reservation.
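A fence_scsi device is created directly in the cluster configuration, pointing at the shared device to be protected. A sketch, assuming a shared disk at the placeholder path /dev/sdb and node names nodea and nodeb:

```shell
# Hypothetical SCSI fencing device for a shared disk
# (device path and node names are placeholders)
pcs stonith create fence_scsi_dev fence_scsi \
    devices=/dev/sdb \
    pcmk_host_list="nodea nodeb" \
    meta provides=unfencing
```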
Virtual machine fencing
The Red Hat High Availability Add-On ships with a number of different fencing agents for different hypervisors. With the exception of fence_virt, the fencing agent for libvirt, they require the same kind of parameters:
- IP or host name of the hypervisor.
- Username and password to access the hypervisor.
- The virtual machine name for each node.
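As an example of how these parameters come together, fencing a virtual machine through a RHEV-M manager with fence_rhevm could look like the following sketch; the host name, credentials, and virtual machine name are placeholders, and the exact parameter names should be confirmed with pcs stonith describe fence_rhevm:

```shell
# Hypothetical RHEV-M based fence device for node "nodea"
# (manager host, credentials, and VM name are placeholders)
pcs stonith create fence_nodea fence_rhevm \
    ipaddr=rhevm.example.com login=admin@internal passwd=secret \
    ssl=1 port=nodea pcmk_host_list=nodea
```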
Cluster nodes that are virtual machines running on a Red Hat Enterprise Linux host with KVM/libvirt require the software fencing device fence-virtd configured and running on the hypervisor. Virtual machine fencing in multicast mode works by sending a fencing request signed with a shared secret key to the libvirt fencing multicast group. This means that the actual node virtual machines can be run on different hypervisor machines, as long as all hypervisors have fence-virtd configured for the same multicast group, and using the same shared secret. To set up the fence-virtd software fence device on the hypervisor running the virtual machines, the following steps are required:
1. On the hypervisor, install the fence-virtd, fence-virtd-libvirt, and fence-virtd-multicast packages. These packages provide the virtual machine fencing daemon, libvirt integration, and multicast listener, respectively.
# yum -y install fence-virtd fence-virtd-libvirt fence-virtd-multicast
2. On the hypervisor, create a shared secret key called /etc/cluster/fence_xvm.key. The target directory /etc/cluster needs to be created manually.
# mkdir -p /etc/cluster
# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=1k count=4
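The dd command writes four 1 KiB blocks, so the resulting key file should be exactly 4096 bytes; checking the size is a quick sanity test after generation. The sketch below writes to a temporary directory instead of /etc/cluster purely for illustration:

```shell
# Generate a demo key in a temporary directory and verify its size
mkdir -p /tmp/fence-demo
dd if=/dev/urandom of=/tmp/fence-demo/fence_xvm.key bs=1k count=4 2>/dev/null
stat -c %s /tmp/fence-demo/fence_xvm.key    # prints 4096
```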
3. On the hypervisor, configure the fence_virtd daemon. Defaults can be used for most options, but make sure to select the libvirt back end and the multicast listener.
# fence_virtd -c
4. Enable and start the fence_virtd daemon on the hypervisor.
# systemctl enable fence_virtd
# systemctl start fence_virtd
5. Copy the shared secret key /etc/cluster/fence_xvm.key to all cluster nodes, keeping the name and the path the same as on the hypervisor.
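The copy can be scripted with scp; the sketch below assumes the hypervisor has root SSH access to cluster nodes named nodea and nodeb, which are placeholder names:

```shell
# Hypothetical: distribute the shared key to each cluster node
# (node names are placeholders)
for node in nodea nodeb; do
    ssh root@${node} mkdir -p /etc/cluster
    scp /etc/cluster/fence_xvm.key root@${node}:/etc/cluster/fence_xvm.key
done
```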
Testing fence devices
Fencing is crucial for an operational cluster. It is mandatory for a cluster administrator to test the fencing setup thoroughly. The fencing device setup can be tested by calling fencing agents from the command line. All fencing agents reside in /usr/sbin/fence_*. The fencing agents typically take a -h option to show all available options, or pcs stonith describe fence_agent can be used to investigate possible options. The options required to test fencing differ from agent to agent.
When testing fencing with an APC power switch, it can be helpful to plug in a lamp for testing instead of the actual cluster node. This makes it immediately visible whether power control works as expected.
Calling a fencing agent directly for testing verifies that the actual fencing device itself is working properly, but not that fencing is correctly configured in the cluster.
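For virtual machine fencing, the fence_xvm agent (the client side of fence_virtd) can be called from a cluster node to verify multicast communication with the hypervisor. Listing the available domains is a non-destructive first test; the node name below is a placeholder:

```shell
# On a cluster node: list virtual machines known to fence_virtd
fence_xvm -o list

# Then test an actual power action against one node (disruptive!)
fence_xvm -o reboot -H nodea
```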
Configuring Cluster Fencing Agents in a Pacemaker Cluster