GFS2 cluster resources
Global File System 2 (GFS2) file systems should not be mounted automatically at boot through the configuration of /etc/fstab. Instead, a Pacemaker resource group should be set up that automatically mounts the GFS2 resources on appropriate nodes when the supporting cluster infrastructure starts up.
Using a Pacemaker cluster file system resource to manage GFS2 file systems allows Pacemaker to manage and control the GFS2 file system resource in the same way as other resources. It also ensures that the file system is mounted and unmounted correctly and cleanly.
This section will outline the steps required to set up a cluster that includes a GFS2 file system.
Creating a GFS2 file system
The following procedure reviews the steps necessary to create a GFS2 file system on a CLVM logical volume and to properly prepare the cluster for the Pacemaker cluster resource that will manage it.
1. Set the global Pacemaker parameter no-quorum-policy to freeze.
# pcs property set no-quorum-policy=freeze
By default, the property no-quorum-policy is set to stop, which immediately stops all resources in the cluster if quorum is lost. This is normally the best option. However, if the property is set to stop, mounted GFS2 file systems and the applications using them cannot be stopped cleanly through the cluster infrastructure.
In this case, any attempt to stop these resources without quorum will fail, ultimately resulting in the entire cluster being fenced every time quorum is lost. By setting no-quorum-policy=freeze, the cluster nodes will do nothing when quorum is lost until quorum is regained.
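The new setting can be confirmed, and the cluster's current quorum state inspected, with commands like the following (a sketch; output depends on the cluster):

```shell
# pcs property show no-quorum-policy
# corosync-quorumtool -s
```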
2. Set up a DLM resource. This is a required dependency for clvmd and GFS2 to manage cluster locking.
# pcs resource create dlm ocf:pacemaker:controld op monitor \
> interval=30s on-fail=fence clone interleave=true ordered=true
3. Execute the following command on each node of the cluster to enable clustered locking. This command sets the locking_type parameter in the /etc/lvm/lvm.conf file to 3.
# /sbin/lvmconf --enable-cluster
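As a quick sanity check on each node, the resulting setting can be inspected directly; the grep should report locking_type = 3:

```shell
# grep -E '^[[:space:]]*locking_type' /etc/lvm/lvm.conf
```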
4. Set up clvmd as a cluster resource.
# pcs resource create clvmd ocf:heartbeat:clvm op monitor \
> interval=30s on-fail=fence clone interleave=true ordered=true
5. Set up the clvmd and DLM dependency and startup order. clvmd must start after DLM and must run on the same node as DLM.
# pcs constraint order start dlm-clone then clvmd-clone
# pcs constraint colocation add clvmd-clone with dlm-clone
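The constraints just created, and the state of the dlm-clone and clvmd-clone resources, can be reviewed with:

```shell
# pcs constraint show
# pcs status resources
```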
6. Create the clustered LV and format the volume with a GFS2 file system. Ensure that enough journals are created for each of the nodes in the cluster.
# pvcreate /dev/vdb
# vgcreate -Ay -cy cluster_vg /dev/vdb
# lvcreate -L5G -n cluster_lv cluster_vg
# mkfs.gfs2 -j2 -p lock_dlm -t rhel7-demo:gfs2-demo /dev/cluster_vg/cluster_lv
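Before configuring the Pacemaker resource, it can be useful to confirm that the volume group carries the clustered attribute (the c bit in vg_attr) and that the GFS2 superblock records the expected lock protocol and lock table name:

```shell
# vgs -o vg_name,vg_attr cluster_vg
# tunegfs2 -l /dev/cluster_vg/cluster_lv
```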
Configuring a GFS2 Pacemaker cluster resource
Once the supporting cluster infrastructure and a CLVM logical volume formatted with a GFS2 file system are available, the Pacemaker cluster file system resource can be configured.
The Filesystem resource is used to configure GFS2 resources. Detailed information about the parameters this resource takes is available by running the command pcs resource describe Filesystem. Key information needed will be the CLVM device that is formatted with the file system, and the directory that is the planned mount point for the file system.
Resource clones can be used whenever a copy of a resource should run on each node: when a resource is cloned, a copy of it is started on every node in the cluster. Cloning the GFS2 file system resource enables every node in the cluster to run the resource and mount the file system, provided a journal was created for each node.
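If a node is added to the cluster later, the file system needs an additional journal before the cloned resource can mount it on that node. A journal can be added to a mounted GFS2 file system with gfs2_jadd; for example (the mount point shown is illustrative):

```shell
# gfs2_jadd -j1 /mnt/gfs2-demo
```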
Mount options can be specified as part of the resource configuration with options=options. Two mount options that can have a significant effect on the performance of GFS2 are noatime and relatime.
If atime updates are enabled, as they are by default on GFS2, every time a file is read its access timestamp must be updated. Only a limited number of applications actually use the access timestamp, yet these atime updates can generate a significant amount of write and file-locking traffic, which can degrade GFS2 performance.
The relatime (relative atime) Linux mount option specifies that limited updates to the access time of files are allowed. The atime is only updated if the previous atime update is older than the mtime (modification time) or ctime (last change time) of the file.
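The three timestamps involved can be observed on any Linux file system, not just GFS2. The following sketch pushes a file's atime into the past so that the relatime comparison described above can be seen directly:

```shell
f=$(mktemp)                      # scratch file on a local file system
echo data > "$f"                 # sets mtime (and ctime) to now
touch -a -d '2 days ago' "$f"    # force atime older than mtime
stat -c 'atime=%X mtime=%Y ctime=%Z' "$f"
cat "$f" > /dev/null             # under relatime, this read may update atime
stat -c 'atime=%X mtime=%Y ctime=%Z' "$f"
rm -f "$f"
```

Because the atime here is older than the mtime, a file system mounted with relatime will bring the atime forward on the read; with noatime, the read leaves it untouched.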
1. Create the Pacemaker GFS2 cluster resource. This cluster resource creation command also specifies the noatime mount option to improve performance.
# pcs resource create clusterfs Filesystem \
> device="/dev/cluster_vg/cluster_lv" directory="/mnt/gfs2-demo" \
> fstype="gfs2" options="noatime" op monitor interval=10s on-fail=fence \
> clone interleave=true
2. Create resource constraints to set GFS2 and clvmd dependency and startup order. GFS2 must start after clvmd and must run on the same node as clvmd.
# pcs constraint order start clvmd-clone then clusterfs-clone
# pcs constraint colocation add clusterfs-clone with clvmd-clone
3. Verify that the GFS2 file system is mounted as expected.
# mount | grep /mnt/gfs2-demo
/dev/mapper/cluster_vg-cluster_lv on /mnt/gfs2-demo type gfs2 (rw,noatime,seclabel)
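To confirm the mount on every node rather than just the local one, the same check can be run remotely; the node names here (node1, node2) are placeholders for the actual cluster members:

```shell
# for node in node1 node2; do ssh "$node" 'mount | grep gfs2'; done
```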