Here is what we use as our TaskProlog. It's not much, but maybe it helps.

(Setting up / wiping the directories does not happen in TaskProlog,
but in Prolog/Epilog.)

-----

#!/bin/bash
echo "export WORK_LOCAL=/work/local/$SLURM_JOBID"
echo "export TMP=/work/local/$SLURM_JOBID/tmp"
echo "export HPC_LOCAL=/work/local/$SLURM_JOBID/HPC"

# Without this setting, it was not possible to start more than one
# OpenMPI process per node when running srun directly from the login nodes.
echo "export OMPI_MCA_orte_tmpdir_base=/work/local/$SLURM_JOBID/tmp"
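
Since the directory setup/wipe lives in Prolog/Epilog rather than TaskProlog, here is a rough sketch of what that pair can look like. This is illustrative, not our exact production script: under Slurm, slurmd sets SLURM_JOB_ID and SLURM_JOB_USER and runs Prolog/Epilog as root, and BASE would be /work/local/$SLURM_JOB_ID; the fallback values below only exist so the sketch runs outside Slurm.

```shell
#!/bin/bash
# Sketch of a Slurm Prolog that creates the per-job scratch tree the
# TaskProlog above points jobs at. Epilog does the inverse (rm -rf).
SLURM_JOB_ID=${SLURM_JOB_ID:-12345}          # demo fallback; set by slurmd
SLURM_JOB_USER=${SLURM_JOB_USER:-$(id -un)}  # demo fallback; set by slurmd
SCRATCH_ROOT=${SCRATCH_ROOT:-$(mktemp -d)}   # in production: /work/local
BASE="$SCRATCH_ROOT/$SLURM_JOB_ID"

# Prolog: create the directories and hand them to the job owner.
mkdir -p "$BASE/tmp" "$BASE/HPC"
chown -R "$SLURM_JOB_USER" "$BASE"

# Epilog: wipe the tree again when the job ends:
#   rm -rf "$SCRATCH_ROOT/$SLURM_JOB_ID"
```

Note that Prolog/Epilog run as root on each allocated node, so the chown is what makes the tree usable by the job; TaskProlog, by contrast, runs as the user and can only emit "export ..." lines.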


2017-03-28 12:18 GMT+02:00 Ole Holm Nielsen <ole.h.niel...@fysik.dtu.dk>:
>
> We need to setup the job environment beyond what's inherited from the job
> submission on the login nodes.  The Slurm TaskProlog script example in
> https://slurm.schedmd.com/faq.html#task_prolog is the only example I've been
> able to find on the net.
>
> Question: Does anyone have some good examples of TaskProlog and TaskEpilog
> scripts which they can share with the community?
>
> Added information: We want to set up an environment variable CPU_ARCH taking
> hard-coded text values such as "broadwell", "haswell", etc.  On the login
> nodes we do this with a script in /etc/profile.d/ but this is ignored in
> tasks started by slurmd.
>
> Other useful TaskProlog tasks could be to set up scratch directories for
> jobs and wipe them again in TaskEpilog.  Does anyone have good scripts for
> this?
>
> Thanks a lot,
> Ole
>
> --
> Ole Holm Nielsen
> Department of Physics, Technical University of Denmark
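
Regarding the CPU_ARCH question: since TaskProlog runs as the user on the task's node and injects any "export ..." line it prints into the task environment, one way is to derive the value on the node itself. The /proc/cpuinfo mapping below is only an assumption for illustration (the flag-to-architecture table is incomplete, and broadwell also reports avx2); dropping a hard-coded value into a node-local file at deploy time and cat'ing it here would work just as well.

```shell
#!/bin/bash
# Hypothetical TaskProlog fragment: guess a CPU architecture name from
# /proc/cpuinfo feature flags and export it into the task environment.
flags=$(grep -m1 '^flags' /proc/cpuinfo 2>/dev/null)
case "$flags" in
    *avx512f*) CPU_ARCH=skylake ;;
    *avx2*)    CPU_ARCH=haswell ;;      # broadwell reports avx2 too
    *avx*)     CPU_ARCH=sandybridge ;;
    *)         CPU_ARCH=unknown ;;      # non-x86 or flags not readable
esac
echo "export CPU_ARCH=$CPU_ARCH"
```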
