Hi all,
Recently we found some strange log in slurmctld.log about node not
responding, such as:

[2022-07-09T03:23:10.692] error: Nodes node[128-168,170-178] not responding

[2022-07-09T03:23:58.098] Node node171 now responding

[2022-07-09T03:23:58.099] Node node165 now responding

[2022-07-09T03:23:58.099] Node node163 now responding

[2022-07-09T03:23:58.099] Node node172 now responding

[2022-07-09T03:23:58.099] Node node170 now responding

[2022-07-09T03:23:58.099] Node node175 now responding

[2022-07-09T03:23:58.099] Node node164 now responding

[2022-07-09T03:23:58.099] Node node178 now responding

[2022-07-09T03:23:58.099] Node node177 now responding

Meanwhile, slurmd.log and nhc.log on those nodes all look normal at the
reported timepoint.

So we guess that slurmctld performed some kind of check against those compute
nodes and did not get a response, which led it to consider the nodes not
responding.

The question then is: what detection does slurmctld actually perform? How does
it determine whether a node is responsive or non-responsive?

And is it possible to customize slurmctld's behavior for this detection, for
example the wait timeout or retry count before a node is declared not
responding?
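
For context, our current guess is that the relevant knob is SlurmdTimeout in
slurm.conf, which (as we understand it) bounds how long slurmctld will go
without hearing from a node's slurmd before marking it down; the value below is
just an illustration of what we would try tuning, not a confirmed fix:

```
# slurm.conf (excerpt) -- illustrative only; we are not sure this is the
# parameter that governs the "not responding" messages.
# If slurmctld receives no response from a node's slurmd within this many
# seconds, the node is eventually set to a down/not-responding state.
SlurmdTimeout=300
```

If someone can confirm whether this (or some other parameter) controls the
ping/retry behavior behind those log messages, that would be much appreciated.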
