There are times when an administrator needs to temporarily suspend a cluster node from hosting resources without disrupting regular cluster operation. This can happen, for example, when a critical security update needs to be applied to a hosted resource. By putting the nodes into standby mode one at a time, the update can be applied with reduced downtime. Another use case is testing resource migration. When a node is put into standby mode, no resources are assigned to it, and resources currently running on the node are migrated to another node.
The pcs cluster standby command, run without arguments, puts the local node into standby mode. A remote node can be put into standby mode by passing its name as a parameter, or all nodes at once with the --all option. To put the remote cluster member geeklab.example.com into standby mode, run:
# pcs cluster standby geeklab.example.com
When the --all switch is used, all nodes in the cluster are put into standby mode:
# pcs cluster standby --all
The resource constraint applied by putting a node into standby mode can be removed with pcs cluster unstandby. Without additional options or parameters, the constraint is removed from the local node. A remote cluster member can be provided as a parameter, or the --all switch can be used, to allow the hosting of resources again on the remote node or on all cluster nodes, respectively. Note that removing the resource constraint does not necessarily migrate a resource back to the node it was running on before that node was put into standby mode. To remove the standby resource constraint from all nodes in the current cluster, run:
# pcs cluster unstandby --all
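Putting these commands together, a rolling-update workflow might look like the following sketch. The node name nodea.example.com and the package-update step are assumptions for illustration; substitute your own node names and update procedure:

```shell
# Put one node into standby so its resources migrate to other nodes
# (nodea.example.com is a hypothetical node name).
pcs cluster standby nodea.example.com

# Apply the update while the node hosts no resources
# (illustrative step; use whatever update the resource requires).
yum -y update httpd

# Allow the node to host resources again. Resources do not
# necessarily migrate back automatically.
pcs cluster unstandby nodea.example.com

# Repeat for the remaining cluster nodes, one at a time.
```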
Reviewing cluster status
For a system administrator, it is important to be able to retrieve the current status of the cluster, the cluster nodes, and the cluster resources.
A detailed overview of the cluster status, corosync status, configured resource groups, resources, and status of the cluster nodes is provided by pcs status.
The pcs status output can be limited with one of the following parameters:
| Command | Description |
| --- | --- |
| pcs status cluster | Show only information related to the cluster status. |
| pcs status groups | Show only the configured resource groups and their resources. |
| pcs status resources | Show only the status of the resource groups and their individual resources in the cluster. |
| pcs status nodes | Show only the status of the configured cluster nodes. |
| pcs status corosync | Show only the status of corosync. |
| pcs status pcsd | Show only the status of pcsd on all configured cluster nodes. |
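For example, to check only node membership after putting a node into standby, the nodes subcommand gives a compact view. The output shown in the comments is illustrative, not captured from a live cluster:

```shell
# Show only the status of the configured cluster nodes.
pcs status nodes
# Illustrative output:
# Pacemaker Nodes:
#  Online: nodea.private.example.com nodec.private.example.com
#  Standby: nodeb.private.example.com
#  Offline: noded.private.example.com
```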
The pcs status command is a powerful utility that enables a system administrator to determine the status of cluster node membership and display all information related to the cluster and cluster nodes:
# pcs status
Cluster name: cluster
Last updated: Fri Sep 26 05:47:40 2014
Last change: Wed Sep 24 06:19:49 2014 via cibadmin on nodea.private.example.com
Stack: corosync
Current DC: nodeb.private.example.com (2) - partition with quorum
Version: 1.1.10-29.el7-368c726
4 Nodes configured
6 Resources configured

Node nodeb.private.example.com (2): standby
Online: [ nodea.private.example.com nodec.private.example.com ]
OFFLINE: [ noded.private.example.com ]

Full list of resources:

 fence_nodea    (stonith:fence_rht):    Started nodea.private.example.com
 fence_nodeb    (stonith:fence_rht):    Started nodeb.private.example.com
 fence_nodec    (stonith:fence_rht):    Started nodec.private.example.com
 fence_noded    (stonith:fence_rht):    Started noded.private.example.com
 Resource Group: web
     floatingip (ocf::heartbeat:IPaddr2):   Started nodea.private.example.com
     website    (ocf::heartbeat:apache):    Started nodea.private.example.com

PCSD Status:
  nodea.private.example.com: Online
  nodeb.private.example.com: Online
  nodec.private.example.com: Online
  noded.private.example.com: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
In the previous example, the cluster consists of four cluster nodes with the following status:
- The cluster node nodeb.private.example.com is in standby mode.
- The cluster nodes nodea.private.example.com and nodec.private.example.com are fully operational and participating in the cluster, and are therefore marked as Online.
- The cluster node noded.private.example.com is running, because the status of pcsd on it is Online. The cluster node is marked as OFFLINE because the cluster services on it have either been stopped or failed to communicate with the quorate part of the cluster.