RTFM

[Read This Fine Material] from Joshua Hoblitt

How to fix Pacemaker pcs “Error: node ‘foo’ does not appear to exist in configuration”


# pcs cluster unstandby pollux1
Error: node 'pollux1' does not appear to exist in configuration

I hit this error message from the Pacemaker pcs utility while trying to bring nodes out of standby. I had put them into that state with the command pcs cluster standby foo, updated the OS from ~RHEL6.3 -> RHEL6.5, and then rebooted the nodes to bring them up on a newer kernel.
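Sketched as shell, the maintenance cycle that got me here looked roughly like this (pollux1 stands in for each node; the echo prefixes make it a dry run -- drop them to actually execute):

```shell
# Dry-run sketch of the per-node maintenance cycle (remove the echo prefixes to execute).
echo pcs cluster standby pollux1     # drain resources off the node
echo yum update -y                   # pull the ~RHEL6.3 -> RHEL6.5 packages
echo reboot                          # come back up on the newer kernel
echo pcs cluster unstandby pollux1   # this is the step that fails
```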

It appears that this is a bug in pcs, and there are some details in this (subscriber-only) Red Hat KB article: ‘pcs cluster standby ‘ fails with “Error: node ‘nodename’ does not appear to exist in configuration” in RHEL 6 with pacemaker.

According to the KB article, this problem is fixed in pcs-0.9.90-1.el6_4.1 and pcs-0.9.90-2.el6_5.2, but it can be worked around by using the crm_standby utility in place of pcs for the standby/unstandby operations.

Example from the KB article:

### Standby node
# crm_standby -v on -N <nodename>
### Unstandby node
# crm_standby -D -N <nodename>


The Pacemaker cluster that I experienced this issue on is a bit of an odd duck in that the core OS is official RHEL, but I’m using the pacemaker etc. packages from the SL repos, à la the Clusterlabs quickstart guide.

This is the exact pcs package that was installed on my cluster, and it is quite obviously affected by the bug described in the RH KB article.

# rpm -qa | grep pcs
pcs-0.9.90-2.el6.noarch

Demonstration of the error message and the workaround:

# pcs status
Cluster name: pollux
Last updated: Tue Apr  1 12:50:00 2014
Last change: Tue Apr  1 12:34:38 2014 via crmd on pollux3
Stack: cman
Current DC: pollux1 - partition with quorum
Version: 1.1.10-14.el6-368c726
3 Nodes configured
9 Resources configured


Node pollux1: standby
Node pollux2: standby
Node pollux3: standby

Full list of resources:

 p_ip_nfs1	(ocf::heartbeat:IPaddr2):	Stopped 
 p_ip_nfs2	(ocf::heartbeat:IPaddr2):	Stopped 
 p_ip_nfs3	(ocf::heartbeat:IPaddr2):	Stopped 
 Clone Set: clone_nfs [p_nfs]
     Stopped: [ pollux1 pollux2 pollux3 ]
 impi-fencing-pollux3	(stonith:fence_ipmilan):	Stopped 
 impi-fencing-pollux2	(stonith:fence_ipmilan):	Stopped 
 impi-fencing-pollux1	(stonith:fence_ipmilan):	Stopped 
# pcs cluster unstandby pollux1
Error: node 'pollux1' does not appear to exist in configuration
# crm_standby -D -N pollux1
Deleted nodes attribute: id=nodes-pollux1-standby name=standby

# crm_standby -D -N pollux2
Deleted nodes attribute: id=nodes-pollux2-standby name=standby

# crm_standby -D -N pollux3
Deleted nodes attribute: id=nodes-pollux3-standby name=standby

[root@pollux3 ~]# crm_mon -1
Last updated: Tue Apr  1 12:53:57 2014
Last change: Tue Apr  1 12:51:09 2014 via crm_attribute on pollux1
Stack: cman
Current DC: pollux1 - partition with quorum
Version: 1.1.10-14.el6-368c726
3 Nodes configured
9 Resources configured


Online: [ pollux1 pollux2 pollux3 ]

 p_ip_nfs1	(ocf::heartbeat:IPaddr2):	Started pollux1 
 p_ip_nfs2	(ocf::heartbeat:IPaddr2):	Started pollux1 
 p_ip_nfs3	(ocf::heartbeat:IPaddr2):	Started pollux1 
 Clone Set: clone_nfs [p_nfs]
     Started: [ pollux1 pollux2 pollux3 ]
 impi-fencing-pollux3	(stonith:fence_ipmilan):	Started pollux1 
 impi-fencing-pollux2	(stonith:fence_ipmilan):	Started pollux1 
 impi-fencing-pollux1	(stonith:fence_ipmilan):	Started pollux2 
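Since the workaround has to be applied per node, the three crm_standby calls above can be collapsed into a loop (the echo makes this a dry run that just prints each command; remove it to actually clear the standby attribute on a host with Pacemaker installed):

```shell
# Print the per-node unstandby workaround commands; remove 'echo' to run them.
for n in pollux1 pollux2 pollux3; do
    echo crm_standby -D -N "$n"
done
```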
