Clustered LVM
Clustered LVM allows the use of regular LVM volume groups and logical volumes on shared storage. In a cluster configured with clustered LVM, a volume group and its logical volumes are accessible to all cluster nodes at the same time. With clustered LVM, administrators can use the management benefits of LVM in conjunction with a shared file system like GFS2, for scenarios such as making virtual machine images inside logical volumes available to all cluster nodes.
The active/active configuration of logical volumes in a cluster using clustered LVM is accomplished by a daemon called clvmd, which propagates metadata changes to all cluster nodes. The clvmd daemon manages clustered volume groups, relaying metadata changes made on one cluster node to all remaining nodes in the cluster.
Without the clvmd daemon, LVM metadata changes made on one cluster node would be unknown to other cluster nodes. Since these metadata define which storage addresses are available for data and file system information, metadata changes not propagated to all cluster nodes can lead to corruption of LVM metadata as well as data residing on LVM physical volumes.
In order to prevent multiple nodes from changing LVM metadata simultaneously, clustered LVM uses Distributed Lock Manager (DLM) for lock management. The clvmd daemon and the DLM lock manager must be installed prior to configuring clustered LVM. They can be obtained by installing the lvm2-cluster and dlm packages, respectively. Both packages are available from the Resilient Storage repository.
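On a system subscribed to the Resilient Storage repository, both packages can be installed on each cluster node in one step; the following is a sketch assuming a yum-based system:

```shell
# Install clustered LVM (provides clvmd) and the DLM lock manager
# on each cluster node; assumes the Resilient Storage repository
# is already enabled on the system
yum install -y lvm2-cluster dlm
```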
Configuring clustered LVM
To configure clustered LVM, all cluster nodes must be configured to use LVM’s built-in clustered locking. This can be done by manually editing /etc/lvm/lvm.conf on each node and setting the locking_type option to 3.
# vi /etc/lvm/lvm.conf
locking_type = 3
To improve performance and to support automatic activation of volume groups and logical volumes by udev, LVM maintains a metadata cache. By default, this cache is managed centrally by a daemon, lvmetad. While the use of lvmetad is supported when LVM is configured for local file-based locking (locking_type = 1), it is not currently supported across cluster nodes. Therefore, when LVM is configured for clustered locking, lvmetad must also be disabled on each node with the following setting in /etc/lvm/lvm.conf.
# vi /etc/lvm/lvm.conf
use_lvmetad = 0
When implementing clustered LVM, the preceding LVM configuration changes can be performed manually by editing the /etc/lvm/lvm.conf configuration file. Alternatively, both configuration changes can be effected by running the following command.
# lvmconf --enable-cluster
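To confirm that both settings took effect, the active configuration can be queried on each node. The following is a sketch; on RHEL 7-era LVM the query subcommand is lvm dumpconfig, while newer LVM releases provide the equivalent lvmconfig command.

```shell
# Query the active LVM configuration on each node; after
# lvmconf --enable-cluster, this should report locking_type=3
# and use_lvmetad=0
lvm dumpconfig global/locking_type global/use_lvmetad
```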
After the use of the lvmetad metadata cache has been disabled in the configuration, the running lvm2-lvmetad service should be stopped on each cluster node as well.
# systemctl stop lvm2-lvmetad
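To keep the cache daemon from starting again at boot, the service can also be disabled on each node; for example:

```shell
# Stop the running lvmetad cache daemon and prevent it from
# being started automatically at the next boot
systemctl stop lvm2-lvmetad
systemctl disable lvm2-lvmetad
```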
Once LVM has been configured for clustered locking, create DLM and clvmd resources in the cluster using the controld and clvm resource agents, respectively. Since both resources need to run on every node in the cluster, it is necessary to create them as cloned resources.
# pcs resource create mydlm controld op monitor interval=30s on-fail=fence clone interleave=true ordered=true
# pcs resource create myclvmd clvm op monitor interval=30s on-fail=fence clone interleave=true ordered=true
The clvmd resource depends on the DLM resource. An order constraint can be used to enforce the startup of the DLM resource prior to the clvmd resource. In addition, a colocation constraint should be specified so that both resources run together on the same node.
# pcs constraint order start mydlm-clone then myclvmd-clone
# pcs constraint colocation add myclvmd-clone with mydlm-clone
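Once the resources and constraints are in place, the cluster state can be inspected to verify that both clones are started on every node and that the constraints were recorded as intended; for example:

```shell
# Verify that the mydlm-clone and myclvmd-clone resources
# are started on all cluster nodes
pcs status resources

# Review the order and colocation constraints just created
pcs constraint list
```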
Adding logical volumes with CLVM
After clustered locking has been enabled and the DLM and clvmd cluster resources are running, logical volumes can be created with clustered LVM using standard LVM commands. With clustered LVM configured, any change made to a clustered volume group is propagated to all cluster nodes. New volume groups are automatically marked as clustered, indicating that they are shared with the other nodes in the cluster.
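As a sketch of this workflow, assuming a shared block device visible to all nodes (the device name /dev/sdb and the volume names below are hypothetical):

```shell
# Initialize the shared device as an LVM physical volume
# (run once, on any one cluster node)
pvcreate /dev/sdb

# Create a volume group; with clustered locking enabled it is
# marked clustered by default, and -cy makes this explicit
vgcreate -cy clustervg /dev/sdb

# Create a logical volume inside the clustered volume group
lvcreate -l 100%FREE -n clusterlv clustervg

# A "c" in the vg_attr column confirms the volume group is clustered
vgs -o vg_name,vg_attr clustervg
```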