Setting Up Fencing Devices in a Pacemaker Cluster

By admin

Overview of fence device configuration

Fencing is a requirement for every operational cluster. The first step when setting up fencing for the cluster is the setup of the hardware or software device that does the actual fencing.

The Red Hat Enterprise Linux High Availability Add-on provides a variety of fencing agents for use with different fence devices. The pcs stonith list command displays all installed fencing agents:

# pcs stonith list 
fence_apc - Fence agent for APC over telnet/ssh 
fence_apc_snmp - Fence agent for APC over SNMP 
fence_bladecenter - Fence agent for IBM Bladecenter
...

Depending on the fence device and fencing agent in use, different parameters are required. These parameters are passed by the fencing agent to the fencing device; communication with the device, and therefore successful fencing of a cluster node, works only when the required set of parameters is supplied. Every shipped fencing agent has a man page on the system describing the parameters that can be passed to the fencing device. The possible and required parameters for a specific fencing agent can also be listed with the command pcs stonith describe [FENCINGAGENT]:

# pcs stonith describe fence_rhevm

Examples of fence device configuration

Different fencing devices require different hardware and software configurations. The hardware needs to be set up and configured, the required software needs to be configured, and the various configuration parameters need to be documented for later use with the fencing agent.

APC network power switch fencing

One way to configure power fencing is to use an APC network power switch. The hardware setup includes power cabling of the cluster nodes with the APC network power switch. Fencing with an APC network power switch requires the fencing agent to log into the power switch to control the power outlet of a specific node. For setting up fencing with an APC fence device, it is important to document at least the following switch settings for later use with the fencing agent:

  • IP address of the APC fence device.
  • Username and password to access the APC fence device.
  • Whether the device is accessed via SSH or telnet.
  • The plug ID(s) for each cluster node.
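
With these values documented, the fencing agent can later be exercised directly from the command line (see the testing section below). For example, a hypothetical status check against an APC switch might look like this, where the IP address, credentials, and plug number are placeholders for the documented values:

# fence_apc -a 192.168.10.5 -l apc -p apc -n 1 -o status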

Management hardware fencing

Management hardware, such as iLO, DRAC, or IPMI cards, can power down, power up, and power cycle systems. At a minimum, the following parameters need to be configured and known to the cluster administrator to use management cards as fencing devices:

  • IP address of the management device.
  • Username and password to access the management fence device.
  • Which machines are handled by the management fence device.
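
As with the APC example, these values can be checked directly from the command line. A hypothetical status query against an IPMI-based management card with fence_ipmilan might look like this (IP address and credentials are placeholders):

# fence_ipmilan -a 192.168.10.21 -l admin -p password -o status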

SCSI fencing

SCSI fencing does not require any physical hardware dedicated to fencing. The cluster administrator needs to know which device has to be blocked from cluster node access with SCSI reservation.
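
The parameters accepted by fence_scsi, including the one used to list the shared devices to act on, can be reviewed with the describe subcommand:

# pcs stonith describe fence_scsi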

Virtual machine fencing

The Red Hat High Availability Add-On ships with a number of different fencing agents for different hypervisors. With the exception of fence_virt, the fencing agent for libvirt, these agents all require the same kind of parameters:

  • IP or host name of the hypervisor.
  • Username and password to access the hypervisor.
  • The virtual machine name for each node.
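
For example, the fence_rhevm agent shown earlier uses exactly this kind of information. A hypothetical status check of a virtual machine named nodea might look like the following, with the manager host name, credentials, and VM name as placeholders:

# fence_rhevm -a rhevm.example.com -l admin@internal -p password -z -n nodea -o status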

Libvirt fencing

Cluster nodes that are virtual machines running on a Red Hat Enterprise Linux host with KVM/libvirt require the software fencing device fence-virtd to be configured and running on the hypervisor. Virtual machine fencing in multicast mode works by sending a fencing request signed with a shared secret key to the libvirt fencing multicast group. This means that the actual node virtual machines can run on different hypervisor machines, as long as all hypervisors have fence-virtd configured for the same multicast group and use the same shared secret. To set up the fence-virtd software fence device on the hypervisor running the virtual machines, the following steps are required:

1. On the hypervisor, install the fence-virtd, fence-virtd-libvirt, and fence-virtd-multicast packages. These packages provide the virtual machine fencing daemon, libvirt integration, and multicast listener, respectively.

# yum -y install fence-virtd fence-virtd-libvirt fence-virtd-multicast

2. On the hypervisor, create a shared secret key called /etc/cluster/fence_xvm.key. The target directory /etc/cluster needs to be created manually.

# mkdir -p /etc/cluster 
# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=1k count=4

3. On the hypervisor, configure the fence_virtd daemon. Defaults can be used for most options, but make sure to select the libvirt back end and the multicast listener.

# fence_virtd -c
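
The interactive configuration is stored in /etc/fence_virt.conf. A quick way to confirm that the libvirt back end and the multicast listener were selected is to check that file, for example:

# grep -E 'backend|listener' /etc/fence_virt.conf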

4. Enable and start the fence_virtd daemon on the hypervisor.

# systemctl enable fence_virtd 
# systemctl start fence_virtd
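
To verify that the daemon is running:

# systemctl status fence_virtd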

5. Copy the shared secret key /etc/cluster/fence_xvm.key to all cluster nodes, keeping the name and the path the same as on the hypervisor.
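
For example, the key could be distributed with scp (the node name is a placeholder, and the /etc/cluster directory must exist on each node first):

# ssh root@nodea.example.com mkdir -p /etc/cluster
# scp /etc/cluster/fence_xvm.key root@nodea.example.com:/etc/cluster/fence_xvm.key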

Testing fence devices

Fencing is crucial for an operational cluster, so it is mandatory for a cluster administrator to test the fencing setup thoroughly. The fencing device setup can be tested by calling the fencing agents directly from the command line. The fencing agents are installed as /usr/sbin/fence_*. They typically take a -h option to show all available options, or pcs stonith describe fence_agent can be used to investigate the possible options. The options required to test fencing differ from agent to agent.
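
For example, with the libvirt fencing setup described above, the fence_xvm agent can be run from one of the cluster node virtual machines to check that the hypervisor's fence_virtd daemon responds; the node (domain) name below is a placeholder:

# fence_xvm -o list
# fence_xvm -o status -H nodea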

Conclusion

When testing fencing with an APC power switch, it can be helpful to plug in a lamp instead of the actual cluster node. This makes it easy to see whether controlling the power works as expected.

Calling a fencing agent directly verifies that the actual fencing device itself is working properly, but not that fencing is correctly configured in the cluster.
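
To test that fencing is also configured correctly in the cluster, a node can additionally be fenced through the cluster itself once the stonith resources are in place; the node name below is a placeholder:

# pcs stonith fence nodea.example.com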
