Global File System 2 (GFS2)
Global File System 2 (GFS2) is a cluster file system interfacing directly with the kernel VFS layer. This means that the same file system can be mounted and used by multiple cluster nodes simultaneously, while still providing a full regular file system, including features such as support for POSIX ACLs, extended attributes, and quotas.
To accomplish this, every node accessing a GFS2 file system uses the cluster infrastructure provided by Corosync and Pacemaker to provide services such as fencing and locking. Each cluster node mounting a GFS2 file system will use a separate journal. If a node fails, one of the other nodes in the cluster will replay the journal for the failed node after the failed node has been fenced. To prevent race conditions between two nodes when accessing the file system, GFS2 uses the Distributed Lock Manager (DLM) to coordinate locks on files and directories.
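For illustration, the DLM lockspaces in use on a node that has a GFS2 file system mounted can be listed with the dlm_tool utility from the dlm package (an optional check, not a required step):

# dlm_tool ls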
Global File System 2 (GFS2) preparations
Before creating a GFS2 file system:
- Make sure that the gfs2-utils package is installed on all of the cluster nodes.
- Make sure that there is a clustered logical volume, accessible from all cluster nodes, on which to create the GFS2 file system.
- Make sure that the clocks on all cluster nodes are synchronized (preferably with chronyd or ntpd); see the example checks after this list.
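As a quick sanity check, commands such as the following can be run on each node (a minimal sketch assuming a Red Hat-style system using dnf and chronyd; adjust package and service names to the distribution in use):

# dnf install -y gfs2-utils
# systemctl is-active chronyd
# chronyc tracking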
Furthermore, a few pieces of information are needed when creating a GFS2 file system: the cluster name, a unique name for the file system, and the number of journals required (one per node that will mount the file system simultaneously).
Creating a GFS2 file system
Once all the prerequisites are in place, use the mkfs.gfs2 command from any one of the nodes to create a GFS2 file system. The most common options to mkfs.gfs2 are listed in the following table.
| mkfs.gfs2 option | Description |
|---|---|
| -t [lock_table_name] | The name of the locking table (not used with lock_nolock). For lock_dlm this should be [clustername]:[fs_name]. Only nodes that are a member of [clustername] will be allowed to mount this file system. [fs_name] should be a unique name to distinguish this file system, between one and sixteen characters in length. |
| -j [number_of_journals] | The number of journals to create initially (more journals can be added later). Each node accessing the file system simultaneously needs its own journal. This option defaults to one journal if omitted. |
| -J [journal_size] | The size of the journals to be created, in MiB. Journals must be at least 8 MiB. The journal size defaults to 128 MiB if no size is given. |
For example, to create a GFS2 file system called examplegfs2, belonging to the examplecluster.cluster, with three 64 MiB journals, on the /dev/clusteredvg/lv_gfs clustered logical volume, use the following command:
# mkfs.gfs2 -t examplecluster:examplegfs2 -j 3 -J 64 /dev/clusteredvg/lv_gfs
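After creation, the superblock settings, including the lock table name, can be verified with the tunegfs2 utility from gfs2-utils:

# tunegfs2 -l /dev/clusteredvg/lv_gfs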
When running the mkfs.gfs2 command to create a GFS2 file system, the size of the journals may be specified with the -J option. If a size is not specified, it defaults to 128 MiB, which should be optimal for most applications; it is generally recommended to keep the default journal size.
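If more nodes later need simultaneous access than there are journals, additional journals can be added to a mounted file system with gfs2_jadd; a minimal sketch (the mount point /mnt/gfs2 is illustrative):

# gfs2_jadd -j 1 /mnt/gfs2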
Mounting a GFS2 file system
Before mounting a GFS2 file system, the file system must exist, the volume where the file system exists must be activated, and the supporting clustering and locking systems must be started. For testing purposes, the GFS2 file system can be mounted in the same way as any other typical Linux file system. For normal production operation, the GFS2 file system should be mounted by configuring it as a cluster resource.
GFS2 file systems that have been mounted manually rather than automatically through Pacemaker will not be known to the system when file systems are unmounted at system shutdown. As a result, the GFS2 shutdown script will not unmount the GFS2 file system. After the GFS2 shutdown script is run, the standard shutdown process kills off all remaining user processes, including the cluster infrastructure, and tries to unmount the file system. This unmount will fail without the cluster infrastructure, and the system will hang.
To prevent the system from hanging when the GFS2 file systems are unmounted, do one of the following:
- Always use Pacemaker to manage the GFS2 file system as a cluster resource.
- If a GFS2 file system has been mounted manually with the mount command, be sure to unmount the file system manually with the umount command before rebooting or shutting down the system.
If the file system hangs while it is being unmounted during system shutdown under these circumstances, perform a hardware reboot. It is unlikely that any data will be lost since the file system is synced earlier in the shutdown process.
The basics of mounting a GFS2 file system are identical to those of any other regular file system:
# mount [-t gfs2] [-o mount_options] [blockdevice] [mountpoint]
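For example, to mount the file system created earlier (the mount point /mnt/gfs2 is illustrative):

# mount -t gfs2 /dev/clusteredvg/lv_gfs /mnt/gfs2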
To use POSIX ACLs (with getfacl and setfacl) on a GFS2 file system, it will need to be mounted with the acl mount option. A line for a GFS2 file system in /etc/fstab might look something like this:
# cat /etc/fstab
/dev/clusteredvg/lv_gfs /mountpoint gfs2 acl 0 0
Note
It is critical that the last column of any entry in /etc/fstab (the fs_passno column) is set to 0, so that fsck is never run automatically on the GFS2 file system at boot. Running fsck.gfs2 against a file system that is mounted on another node can corrupt it.
In general, it is not recommended practice to add GFS2 file systems to /etc/fstab. They should, by preference, be mounted at boot as a cluster resource by Pacemaker. If mounted for testing purposes outside Pacemaker, it is better to mount the GFS2 file system manually and then unmount it manually from all nodes when it is no longer needed, prior to shutting down the cluster nodes.
Managing a GFS2 File System – Adding journals to GFS2, extending and repairing GFS2
How to configure a GFS2 Pacemaker cluster resource
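As a preview, a GFS2 file system is typically managed with the ocf:heartbeat:Filesystem resource agent, cloned so that it runs on every node. A minimal sketch using pcs (the resource name, device, and mount point are illustrative, and the supporting dlm and lvmlockd resources are assumed to be configured already):

# pcs resource create clusterfs ocf:heartbeat:Filesystem device="/dev/clusteredvg/lv_gfs" directory="/mnt/gfs2" fstype="gfs2" options="acl" op monitor interval=10s clone interleave=true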