Managing Resource Startup Order in Pacemaker Cluster (Managing Constraints)

By admin

In this post, let's see how resource groups can be used to control resource startup order by configuring an active/passive NFS server resource group.

What are constraints?

Constraints are restrictions that determine the order in which resources can be started and stopped, on which nodes they can run, or which other resources they can share a node with. Resource groups provide a convenient shortcut: the implicit ordering and colocation constraints they create are sufficient for many use cases. Resources that are part of the same resource group:

  • Start in the defined sequence.
  • Stop in the reverse order.
  • Always run on the same cluster node.

Configuring an active/passive NFS resource group

To provide a highly available NFS service, all required resources must run on the same cluster node, and the cluster resources for an active/passive NFS server must start in a particular order. These requirements can be fulfilled by configuring the resources required for the NFS service to start in the following order:

  1. Start the Filesystem resource, which mounts the file system to export from shared storage.
  2. Start the nfsserver resource, which controls the NFS system service.
  3. Start one or more exportfs resource(s), responsible for exporting the NFS shared file systems.
  4. Start the IPaddr2 floating IP address resource.

By placing these resources in one resource group in the correct order, the implicit constraints ensure that the service functions correctly. Creating a highly available NFS export with the Red Hat High Availability Add-on requires the following procedure:

1. First, a shared storage device, such as an iSCSI partition, needs to be created and formatted with either an XFS or an ext4 file system. The firewall on all cluster nodes that will run the resource group providing the NFSv4 share must allow connections to an NFSv4 server.
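
A rough sketch of this preparation step, assuming the /dev/sdb1 device used in the following steps and a firewalld-based firewall, might look like this:

# mkfs.xfs /dev/sdb1
# firewall-cmd --permanent --add-service=nfs
# firewall-cmd --reload

The file system is created once, from any one cluster node, while the firewall commands are run on every node that may host the resource group.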

2. The Filesystem resource must be started before and stopped after the nfsserver resource. This is important when stopping the resource group, because the file system cannot be unmounted if the NFS server is still running and NFSv4 leases are active. To create the Filesystem resource named nfsfs, with the XFS-formatted shared storage device /dev/sdb1 and the mount point directory /nfsshare, as part of the mynfs resource group, execute:

# pcs resource create nfsfs Filesystem device=/dev/sdb1 directory=/nfsshare fstype=xfs --group mynfs

3. The NFS server resource is started after the Filesystem resource. The nfs_shared_infodir option points to a directory on the shared storage where the NFS server keeps stateful client information needed for failover recovery. To add the nfsserver resource named nfssvc to the resource group mynfs, with nfs_shared_infodir set to /nfsshare/nfsinfo, run:

# pcs resource create nfssvc nfsserver nfs_shared_infodir=/nfsshare/nfsinfo --group mynfs

4. The exportfs resource agent allows for exporting one or more file systems as NFSv4 shares to a specified host. The exports must be started after the NFS server. To create an NFS root export of the /nfsshare directory named nfsfs1, mountable by the 172.18.20.15/32 client with rw, sync, no_root_squash options as part of the mynfs resource group, run:

# pcs resource create nfsfs1 exportfs clientspec=172.18.20.15/32 options=rw,sync,no_root_squash directory=/nfsshare fsid=0 --group mynfs

5. The NFS service requires an IP address on the public network for client access, so an IP address resource is needed. The IP address resource must be started after all exportfs resources, to ensure that all NFS exports are available by the time a client connects to the clustered NFS share. The IPaddr2 resource named nfsip with IP address 172.16.20.83/24 is added to the mynfs resource group by executing:

# pcs resource create nfsip IPaddr2 ip=172.16.20.83 cidr_netmask=24 --group mynfs

The resulting highly available NFS service can be mounted as an NFSv4 share on a client. In the event of a failover, the NFSv3 client may pause up to 5 seconds and the NFSv4 client may pause up to 90 seconds while the file locks are recovered.
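
Once the group is configured, its state can be verified with pcs status on a cluster node and, as a sketch, a client matching the clientspec could mount the NFSv4 pseudo-root through the floating IP from the example (assuming /mnt as an arbitrary mount point on the client):

# pcs status
# mount -t nfs4 172.16.20.83:/ /mnt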

Managing constraints

Constraints are rules that place restrictions on the order in which resources or resource groups may be started, or the nodes on which they may run. Constraints are important for managing complex resource groups or sets of resource groups, which depend upon one another or which may interfere with each other.

There are three main types of constraints:

  • Order constraints, which control the order in which resources or resource groups are started and stopped.
  • Location constraints, which control the nodes on which resources or resource groups may run.
  • Colocation constraints, which control whether two resources or resource groups may run on the same node.

Resource groups are the easiest way to set constraints on resources. All resources in a resource group implicitly have colocation constraints on each other; that is, they must run on the same node. Resources that are members of a resource group also have order constraints on each other: they start in the order in which they were added to the group, and they are stopped in the reverse of the startup order.
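
For illustration, such a group is created simply by listing its members in the desired start order; a minimal sketch that places two existing (hypothetical) resources resA and resB into a group named mygroup:

# pcs resource group add mygroup resA resB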

There are times when it is useful to configure explicit constraints on resources or resource groups manually. For example, two resource groups may need to run on different cluster nodes but start and stop their resources independently, or the cluster administrator may want a resource group to preferentially run on a particular cluster node when it is available. Explicit constraints make these scenarios possible.

Note: Manual constraints should be set on resource groups as a whole, not on individual resources within a resource group, as doing so may break the resource group or cause other unexpected effects.

Configuring order constraints

Order constraints may be the simplest to understand. Order constraints mandate the order in which services must start. This may be important if, for example, the resource group for a high-availability PostgreSQL database, its IP addresses, and other resources must be started before the resource group for some high-availability service that accesses the database on startup.

To set an order constraint between two resources or resource groups:

# pcs constraint order A then B
Adding A B (kind: Mandatory) (Options: first-action=start then-action=start)

The preceding command sets a mandatory order constraint between the two resources or resource groups A and B. This has the following effects on the operation of these resources or resource groups:

  • If both resources are stopped, and A is started, then B is also allowed to start.
  • If both resources are running, and A is disabled by pcs, then the cluster will stop B before stopping A.
  • If both resources are running, and A is restarted, then the cluster will also restart B.
  • If A is not running and the cluster cannot start it, B will be stopped. This can happen if the first resource is misconfigured or broken, for example.
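
For the PostgreSQL scenario mentioned above, a hedged sketch using hypothetical resource group names dbgrp (the database group) and webgrp (the group that depends on it) would be:

# pcs constraint order dbgrp then webgrp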

Viewing and removing constraints

Viewing constraints

The pcs constraint command (or pcs constraint list) can be used to view the current constraints set for the cluster’s resources. Using it with the --full option provides more detail, including the IDs of the constraints.

# pcs constraint 
Location Constraints: 
Ordering Constraints: 
    start testip then start webfarm 
Colocation Constraints:
# pcs constraint --full 
Location Constraints: 
Ordering Constraints: 
    start testip then start webfarm (Mandatory) (id:order-testip-webfarm-mandatory) 
Colocation Constraints: 

Removing constraints

The pcs constraint remove id command can be used to delete a constraint. The id can be obtained from the pcs constraint --full command.

# pcs constraint remove order-testip-webfarm-mandatory
# pcs constraint 
Location Constraints: 
Ordering Constraints: 
Colocation Constraints: 

Configuring location constraints

Location constraints are somewhat more complex than order constraints. A location constraint controls the node on which a resource or resource group will run. If no other influences or constraints apply, the cluster software will normally try to spread resources evenly across the cluster. A location constraint on a resource gives it a preferred node or nodes. The node selected is influenced by the constraint’s score. The complexity arises because the node chosen by the cluster also takes into account the location preferences of resources with which the resource is colocated. If resource A must be colocated with resource B, and B can only run on node1, then it does not matter that A prefers node2.

Score and score calculations

Score determines the node on which a resource will be started. The possible scores and their effects are:

  • INFINITY: The resource must run here.
  • Positive number or zero: The resource should run here.
  • Negative number: The resource should not run here (will avoid this node).
  • -INFINITY: The resource must not run here.

The resource will relocate to the node with the highest score that is available. If two nodes have the same score (for example, if two nodes have a score of INFINITY), then the cluster will start the resource on one of those two nodes. Generally, the cluster software will prefer a node that is not already running a resource.
If multiple constraints or scores apply, they are added together and the total score applies. If a score of INFINITY is added to a score of -INFINITY, the resulting score is -INFINITY. (That is, if a constraint is set so that a resource should always avoid a node, that constraint wins.)

If no node with a sufficiently high score can be found, the resource will not be started.

Viewing current scores

The crm_simulate -sL command will display the scores (-s) currently allocated to resources, resource groups, and stonith devices on the live cluster (-L). Since multiple scores may apply, care needs to be taken when interpreting the output of the command.

# crm_simulate -sL 
Current cluster status: 
Online: [ nodea nodeb nodec ]
  fence_nodeb_virt       (stonith:fence_xvm):     Started nodea
  fence_nodea_virt       (stonith:fence_xvm):     Started nodec
  Resource Group: webfarm 
      testip        (ocf::heartbeat:IPaddr2):     Started nodea 
      apache        (ocf::heartbeat:apache):     Started nodea
  fence_nodec_virt       (stonith:fence_xvm):     Started nodea 

Allocation scores: 
native_color: fence_nodeb_virt allocation score on nodea: 0 
native_color: fence_nodeb_virt allocation score on nodeb: 0 
native_color: fence_nodeb_virt allocation score on nodec: 0 
native_color: fence_nodea_virt allocation score on nodea: 0 
native_color: fence_nodea_virt allocation score on nodeb: 0 
native_color: fence_nodea_virt allocation score on nodec: 0 
group_color: webfarm allocation score on nodea: 100 
group_color: webfarm allocation score on nodeb: 0 
group_color: webfarm allocation score on nodec: 0 
group_color: testip allocation score on nodea: 100 
group_color: testip allocation score on nodeb: 0 
group_color: testip allocation score on nodec: 0 
group_color: apache allocation score on nodea: 0 
group_color: apache allocation score on nodeb: 0
group_color: apache allocation score on nodec: 0 
native_color: testip allocation score on nodea: 100 
native_color: testip allocation score on nodeb: 0 
native_color: testip allocation score on nodec: 0 
native_color: apache allocation score on nodea: 0 
native_color: apache allocation score on nodeb: -INFINITY 
native_color: apache allocation score on nodec: -INFINITY 
native_color: fence_nodec_virt allocation score on nodea: 0 
native_color: fence_nodec_virt allocation score on nodeb: 0 
native_color: fence_nodec_virt allocation score on nodec: 0

Transition Summary:

In the preceding example, a location constraint was set such that the webfarm resource group prefers nodea with a score of 100. The cluster started its testip resource there. Once that was running, the resource group automatically set the score for its apache resource to -INFINITY on all other nodes in the cluster so that it had to also start on nodea. (All resources in a resource group must run on the same cluster node.)

Note: The crm_simulate command can also be used as a troubleshooting tool to predict how resources will relocate given a particular cluster configuration and sequence of events. How to use the tool in this manner is beyond the scope of this chapter.

Setting location constraints

To set a mandatory location constraint:

# pcs constraint location id prefers node

The preceding command will set the resource or resource group id to have a score of INFINITY for running on node. If no other location constraints exist, this will force id to always run on node if it is up; otherwise, it will run on any other node in the cluster. The change will take effect immediately, relocating the resource if necessary.
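
As a concrete sketch using the mynfs resource group created earlier and the node names from the crm_simulate example:

# pcs constraint location mynfs prefers nodea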

A lower score can be set instead. If multiple nodes have different scores, the highest available will be used. This can be used to implement a priority ranking of preferred nodes for a resource or resource group.

For example, to set a score of 500 on a location constraint:

# pcs constraint location id prefers node=500
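
Several such constraints with different scores can be combined into a preference ranking. As a sketch that, instead of a mandatory constraint, makes nodea the first choice and nodeb the second choice for the mynfs resource group:

# pcs constraint location mynfs prefers nodea=200
# pcs constraint location mynfs prefers nodeb=100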

A resource or resource group can be told to avoid a particular node instead. The following command sets a location constraint with a score of -INFINITY, which will force the resource to run on another node if possible. (If no other nodes are available, the resource will refuse to start.)

# pcs constraint location id avoids node

Note: The ‘pcs resource move id node’ and ‘pcs resource ban id node’ commands set temporary location constraints. These constraints can be cleared with ‘pcs resource clear id’, but that command has no effect on location constraints set with pcs constraint.
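
For example, a sketch of temporarily moving the webfarm resource group from the earlier example to nodeb and then clearing the resulting temporary constraint:

# pcs resource move webfarm nodeb
# pcs resource clear webfarm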

Avoiding unnecessary resource relocation

Pacemaker assumes that resource relocation has no cost by default. In other words, if a higher-score node becomes available, Pacemaker will relocate the resource to that node. This can cause extra unplanned downtime for that resource, especially if it is expensive to relocate. (For example, the resource may take significant time to relocate.)

A default resource stickiness can be set that establishes a score for the node on which a resource is currently running. For example, assume the resource stickiness is set to 1000, and the resource’s preferred node has a location constraint with a score of 500. On resource start, it will run on the preferred node. If the preferred node crashes, the resource will move to one of the other nodes and a score of 1000 will be set for it on that node. When the preferred node comes back, it only has a score of 500 and the resource will not automatically relocate to the preferred node. The cluster administrator will need to manually relocate the resource to the preferred node at a convenient time (perhaps during a planned outage window).

To set a default resource stickiness of 1000 for all resources:

# pcs resource defaults resource-stickiness=1000

To view current resource defaults:

# pcs resource defaults

To clear the resource-stickiness setting:

# pcs resource defaults resource-stickiness=
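
Stickiness can also be set as a meta attribute on an individual resource or resource group rather than cluster-wide; a sketch using the webfarm group from the earlier example:

# pcs resource meta webfarm resource-stickiness=1000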

Note: Resource stickiness for a resource group is calculated based on how many resources are running in that group. If a resource group has five active resources and resource stickiness is 1000, then the resource group has an effective stickiness of 5000.

Configuring colocation constraints

Colocation constraints specify that two resources must (or must not) run on the same node. To set a colocation constraint to keep two resources or resource groups together:

# pcs constraint colocation add B with A

The preceding command will cause the resources or resource groups B and A to run on the same node. Note that this implies that Pacemaker will first figure out on which node A should run, and only then determine whether B can run there as well. The starting of A and B occurs in parallel. If there is no location where both A and B can run, then A will get its preference and B will remain stopped. If there is no available location for A, then B will also have nowhere to run.

A colocation constraint with a score of -INFINITY can also be set to force the two resources or resource groups to never run on the same node:

# pcs constraint colocation add B with A -INFINITY
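
For example, using the resource groups from the earlier examples, a sketch that keeps the webfarm and mynfs groups on different nodes:

# pcs constraint colocation add webfarm with mynfs -INFINITY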