I had similar problems in the past.

The 2 most common issues were:

1. Controller load - if slurmctld was under heavy load, it sometimes didn't respond in a timely manner and exceeded the timeout limit (see the slurm.conf sketch after this list).

2. Topology and message forwarding/aggregation.
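For 1, the knob that matters is usually MessageTimeout in slurm.conf. Just a sketch - the default is 10 seconds and the value below is illustrative, not a recommendation:

# slurm.conf - push to all nodes, then restart/reconfigure the daemons
MessageTimeout=30

That only buys headroom, of course; if slurmctld is saturated it's worth finding out why (sdiag helps there).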


For 2 - it would seem the nodes designated for forwarding are statically assigned based on topology. I could be wrong, but that's my observation: I would get the socket timeout error when those nodes had issues, even though other nodes in the same topology 'zone' were fine and could have been used instead.
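As far as I understand, that layout comes from topology.conf together with RoutePlugin/TreeWidth in slurm.conf - roughly like this (switch and node names are made up):

# topology.conf
SwitchName=leaf0 Nodes=node[001-100]
SwitchName=leaf1 Nodes=node[101-200]
SwitchName=top Switches=leaf[0-1]

# slurm.conf
RoutePlugin=route/topology
TreeWidth=50

So if one of the nodes doing the forwarding is sick, traffic routed through it can time out even though the rest of the zone is healthy.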


It took debug3 to observe this in the logs, I think.
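If you want to try the same, the controller's log level can be bumped at run time:

scontrol setdebug debug3
# reproduce the problem, then turn it back down
scontrol setdebug info

For the slurmd side, SlurmdDebug=debug3 in slurm.conf plus "scontrol reconfigure" should do it. debug3 is very chatty, so don't leave it on longer than needed.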


HTH

--Dani_L.


On 6/11/19 5:27 PM, Steffen Grunewald wrote:
On Tue, 2019-06-11 at 13:56:34 +0000, Marcelo Garcia wrote:
Hi 

Since mid-March 2019 we have been having a strange problem with slurm. Sometimes, the command "sbatch" fails:

+ sbatch -o /home2/mma002/ecf/home/Aos/Prod/Main/Postproc/Lfullpos/50.1 -p operw /home2/mma002/ecf/home/Aos/Prod/Main/Postproc/Lfullpos/50.job1
sbatch: error: Batch job submission failed: Socket timed out on send/recv operation
I've seen such an error message from the underlying file system.
Is there anything special (e.g. non-NFS) in your setup that may have changed
in the past few months?

Just a shot in the dark, of course...

Ecflow preprocesses the script, which generates a second script that is submitted to slurm. In our case, the submission script is called "42.job1".

The problem we have is that sometimes, the "sbatch" command fails with the message above. We couldn't find any hint in the logs. Hardware and software logs are clean. I increased the debug level of slurm, to
# scontrol show config
(...)
SlurmctldDebug          = info

But still no clue about what is happening. Maybe the next thing to try is to use "sdiag" to inspect the server. Another complication is that the problem is random, so should we put "sdiag" in a cronjob? Is there a better way to run "sdiag" periodically?
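A plain cron entry would probably do for periodic snapshots - the interval and log path below are just an example:

*/10 * * * * /usr/bin/sdiag >> /var/log/slurm/sdiag.log 2>&1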

Thanks for your attention.

Best Regards

mg.

- S

