It sounds like you are asking whether there needs to be a shared /home, which you do not need. You do, however, need to ensure that each user can access their environment on every node (a home directory, ssh keys, etc).

If you are asking about the job binary and the data it will be processing, again, you do not need a shared filesystem. You could, for example, install the binary on all the nodes.
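
If you would rather not install it everywhere, sbcast can also copy the binary to node-local storage at the start of each job. A minimal sketch, assuming a hypothetical executable my_app and /tmp as the local destination:

    #!/bin/bash
    #SBATCH --nodes=2
    # Copy the executable from the node running the batch script
    # to local disk on every node in the allocation
    sbcast ./my_app /tmp/my_app
    # Run the local copies
    srun /tmp/my_app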

If your job fetches its own data to work on (say, a script that will download/prep .grib files and then run WRF), then there is no need for a shared filesystem.

You will, of course, also need to stage the results out somewhere so that you can access them outside the cluster.
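
As a rough sketch of that pattern (the URL, paths, and destination host below are only placeholders), a batch script can pull its inputs onto node-local scratch, run, and then push the results back out:

    #!/bin/bash
    #SBATCH --job-name=wrf-run
    # Work entirely on node-local scratch; no shared filesystem assumed
    WORKDIR=/tmp/$SLURM_JOB_ID
    mkdir -p "$WORKDIR" && cd "$WORKDIR"
    # Download and prep the input .grib files
    wget -q https://example.org/data/input.grib
    # ... run the WRF pre-processing and wrf.exe here ...
    # Stage the results out to a host reachable from outside the cluster
    scp -r wrfout_* user@storage.example.org:/results/$SLURM_JOB_ID/
    # Clean up the local scratch space
    rm -rf "$WORKDIR"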


Brian Andrus


On 6/19/2020 5:04 AM, David Baker wrote:
Hello,

We are currently helping a research group to set up their own Slurm cluster. They have asked a very interesting question about Slurm and file systems. That is, they are posing the question -- do you need a shared user file store on a Slurm cluster?

So, in the extreme case where there is no shared file store for users, can Slurm operate properly across a cluster? I have seen commands like sbcast to move a file from the submission node to a compute node; however, that command can only transfer one file at a time. Furthermore, what would happen to the standard output files? I'm going to guess that there must be a shared file system, but it would be good if someone could please confirm this.

Best regards,
David
