Zoran,
Node reboot recovery is to be followed when the system cannot recover from
the observed fault. For a fault like amfd crashing, a node reboot can be
performed. But in the current scenario, the same configuration exists after
reboot, and the node will go for reboot again as opensafd is enabled in the run
I think the discussion got sidetracked by the use of the PL string in nodes.cfg.
On the first node in the OpenSAF cluster, the following info is filled in the
OpenSAF cfg files:
```
cat /usr/share/opensaf/immxml/nodes.cfg
SC node-1 node-1
SC node-2 node-2
PL node-3 node-3
PL node-4 node-4
PL node-5 nod
```
- **Milestone**: 4.7.2 --> 5.0.2
---
**[tickets:#2052] immtools: SC/PL field in nodes.cfg is not used**
**Status:** unassigned
**Milestone:** 5.0.2
**Created:** Tue Sep 20, 2016 09:41 AM UTC by Ritu Raj
**Last Updated:** Tue Sep 20, 2016 02:00 PM UTC
**Owner:** nobody
# Environment details
Hi,
I haven't played much with nodes.cfg, but as far as I know, the first column
tells whether a node is a system controller or a payload. Based on the first
column, the immxml tools know which template to use.
The second column is the AMF node name.
The third column is the CLM node name.
AMF and CLM node don't
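The column layout described above can be sketched as a small parser. This is only an illustrative sketch of how a nodes.cfg line could be interpreted; the function name and the validation are assumptions, not the actual immxml tool code.

```python
# Hypothetical sketch of parsing nodes.cfg; the real immxml tools are
# separate scripts, and this function name is illustrative only.
def parse_nodes_cfg(text):
    """Parse nodes.cfg lines into (role, amf_node, clm_node) tuples.

    role is "SC" (system controller) or "PL" (payload) and, per the
    description above, would decide which template is applied.
    """
    nodes = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        role, amf_node, clm_node = line.split()
        if role not in ("SC", "PL"):
            # The ticket is about this field being ignored; a strict
            # parser would reject anything other than SC/PL here.
            raise ValueError("unknown node type: %s" % role)
        nodes.append((role, amf_node, clm_node))
    return nodes

cfg = """SC node-1 node-1
SC node-2 node-2
PL node-3 node-3"""
for role, amf, clm in parse_nodes_cfg(cfg):
    print(role, amf, clm)
```

A stricter check like the one above would have caught the mismatch discussed in this ticket, where the SC/PL field is currently not consulted at all.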
I want to add this one too:
So, if we start the second node SC-2, it will fail to join the cluster,
and both nodes will go for reboot.
And finally, after reboot, when the nodes join back:
> SC-2 will join with the "ACTIVE" role and the first node (PL-3) will join as
> "QUIESCED"
Syslog of SC-2:
Sep 20 17:27:1
- **summary**: Controller able to join with invalid node_name --> immtools:
SC/PL field in nodes.cfg is not used
- **Type**: defect --> discussion
- **Comment**:
Had a discussion with Ritu; tagging this ticket as a discussion topic and
assigning it to immtools.
The issue can be reproduced as be