How to Create a GFS2 Formatted Cluster File System

By admin

Global File System 2 (GFS2)

Global File System 2 (GFS2) is a cluster file system interfacing directly with the kernel VFS layer. This means that the same file system can be mounted and used by multiple cluster nodes simultaneously, while still providing a full regular file system, including features such as support for POSIX ACLs, extended attributes, and quotas.

To accomplish this, every node accessing a GFS2 file system uses the cluster infrastructure provided by Corosync and Pacemaker to provide services such as fencing and locking. Each cluster node mounting a GFS2 file system will use a separate journal. If a node fails, one of the other nodes in the cluster will replay the journal for the failed node after the failed node has been fenced. To prevent race conditions between two nodes when accessing the file system, GFS2 uses the Distributed Lock Manager (DLM) to coordinate locks on files and directories.

Global File System 2 (GFS2) preparations

Before creating a GFS2 file system:

  • Make sure that the gfs2-utils package is installed on all of the cluster nodes.
  • Make sure that there is a clustered logical volume, accessible from all cluster nodes, on which to create the GFS2 file system.
  • Make sure that the clocks between all cluster nodes are synchronized (preferably with ntp or chronyd).
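
A quick way to check these prerequisites on a yum-based system such as CentOS/RHEL might look like the following; the chronyc command assumes chronyd is in use, and the volume group name clusteredvg is only taken from the examples later in this article:

# yum install -y gfs2-utils     # install on every cluster node
# chronyc tracking              # confirm the clock is synchronized with the time source
# lvs clusteredvg               # confirm the clustered logical volume is visible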

Furthermore, a few pieces of information are needed when creating a GFS2 file system:

  • The name of the cluster that will use the file system.
  • The number of nodes that will be accessing the file system simultaneously, including any that might be added in the future.
  • The name to use for the new file system.
  • The device node for the clustered logical volume that will be used to store the file system.
Creating a GFS2 file system

Once all the prerequisites are in place, use the mkfs.gfs2 command from any one of the nodes to create the GFS2 file system. The most common options to mkfs.gfs2 are listed below.

  • -t [lock_table_name] : The name of the locking table (not used with lock_nolock). For lock_dlm this should be [clustername]:[fs_name]. Only nodes that are members of [clustername] will be allowed to mount this file system. [fs_name] should be a unique name, between one and sixteen characters in length, to distinguish this file system.
  • -j [number_of_journals] : The number of journals to create initially (more journals can be added later). Each node accessing the file system simultaneously needs its own journal. This option defaults to one journal if omitted.
  • -J [journal_size] : The size of each journal in MiB. Journals must be at least 8 MiB. The journal size defaults to 128 MiB if no size is given.

For example, to create a GFS2 file system called examplegfs2, belonging to the examplecluster cluster, with three 64 MiB journals, on the /dev/clusteredvg/lv_gfs clustered logical volume, use the following command:

# mkfs.gfs2 -t examplecluster:examplegfs2 -j 3 -J 64 /dev/clusteredvg/lv_gfs
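
To double-check the lock table name and journal settings of the newly created file system, the superblock can be listed with tunegfs2 (shipped with gfs2-utils on recent releases); the exact output varies by version:

# tunegfs2 -l /dev/clusteredvg/lv_gfs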

When running the mkfs.gfs2 command, the size of the journals may be specified with the -J option. If no size is given, it defaults to 128 MiB, which should be optimal for most applications, and using this default is generally recommended. If the file system is very small (for example, 5 GB), a 128 MiB journal might be impractical; on a larger file system where the space can be spared, 256 MiB journals might improve performance.
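
If more nodes later need to mount the file system than there are journals, extra journals can be added with gfs2_jadd while the file system is mounted (see the article on managing a GFS2 file system linked at the end of this post). A minimal sketch, assuming the file system is mounted at /mnt/gfs2:

# gfs2_jadd -j 1 /mnt/gfs2      # add one more journal to the mounted GFS2 file system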

Mounting a GFS2 file system

Before mounting a GFS2 file system, the file system must exist, the volume on which it exists must be activated, and the supporting clustering and locking services must be started. For testing purposes, the GFS2 file system can be mounted in the same way as any other Linux file system. For normal production operation, the GFS2 file system should be mounted by configuring it as a cluster resource.

GFS2 file systems that have been mounted manually, rather than through Pacemaker, are not known to the system when file systems are unmounted at system shutdown. As a result, the GFS2 shutdown script will not unmount the GFS2 file system. After the GFS2 shutdown script runs, the standard shutdown process kills off all remaining user processes, including the cluster infrastructure, and then tries to unmount the file system. This unmount will fail without the cluster infrastructure and the system will hang.

To prevent the system from hanging when GFS2 file systems are unmounted, do one of the following:

  • Always use Pacemaker to manage the GFS2 file system as a cluster resource.
  • If a GFS2 file system has been mounted manually with the mount command, be sure to unmount it manually with the umount command before rebooting or shutting down the system.

If the file system hangs while it is being unmounted during system shutdown under these circumstances, perform a hardware reboot. It is unlikely that any data will be lost, since the file system is synced earlier in the shutdown process.

The basic syntax for mounting a GFS2 file system is the same as for any other regular file system:

# mount [-t gfs2] [-o mount_options] [blockdevice] [mountpoint]
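
For example, to mount the file system created earlier with ACL support enabled (the mount point /mnt/gfs2 is only an example):

# mkdir -p /mnt/gfs2
# mount -t gfs2 -o acl /dev/clusteredvg/lv_gfs /mnt/gfs2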

To use POSIX ACLs (with getfacl and setfacl) on a GFS2 file system, it will need to be mounted with the acl mount option. A line for a GFS2 file system in /etc/fstab might look something like this:

# cat /etc/fstab
/dev/clusteredvg/lv_gfs2   /mountpoint   gfs2   acl   0 0

Note

It is critical that the last column of any entry in /etc/fstab (the fs_passno column) is set to 0 so that fsck.gfs2 is not run on boot. It is possible that the GFS2 file system is already mounted on another cluster node at boot. Running an fsck on a GFS2 file system that is currently mounted (even on another node) can lead to serious damage to the file system and data loss. For further discussion, see the fsck.gfs2(8) man page.
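
If a file system check is ever required, first make absolutely sure the file system is unmounted on every cluster node, then run fsck.gfs2 from a single node. For example, using the device from the earlier examples:

# fsck.gfs2 -y /dev/clusteredvg/lv_gfs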

In general, it is not recommended practice to add GFS2 file systems to /etc/fstab. They should, by preference, be mounted at boot as a cluster resource by Pacemaker. If mounted outside Pacemaker for testing purposes, it is better to mount the GFS2 file system manually and then unmount it manually from all nodes when it is no longer needed, prior to shutting down the cluster nodes.
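
As a rough sketch of the Pacemaker approach (described in detail in the last article linked below), the file system can be created as a cloned Filesystem resource so that it is mounted on all nodes; the resource name and mount point here are only assumptions:

# pcs resource create examplegfs2 Filesystem device="/dev/clusteredvg/lv_gfs" \
    directory="/mnt/gfs2" fstype="gfs2" options="noatime" \
    op monitor interval=10s on-fail=fence clone interleave=true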

Beginners Guide to Global File System 2 (GFS2)
Managing a GFS2 File System – Adding journals to GFS2, extending and repairing GFS2
How to configure a GFS2 Pacemaker cluster resource
