[slurm-dev] Re: Job temporary directory

2017-01-20 Thread Michael Di Domenico

On Fri, Jan 20, 2017 at 11:16 AM, John Hearns  wrote:
> As I remember, in SGE and in PBS Pro a job has a directory created for it on
> the execution host which is a temporary directory, named with the jobid.
> You can define in the batch system configuration where the root of these
> directories is.
>
> On running srun env, the only TMPDIR I see is /tmp.
> I know - RTFM.  I bet I haven't realised that this is easy to set up...
>
> Specifically I would like a temporary job directory which is
> /local/$SLURM_JOBID
>
> I guess I can create this in the job and then delete it, but it would be
> cleaner if the batch system deleted it, and didn't allow failed jobs or
> bad scripts to leave it on disk.

This has come up on the list a few times over the years, though I don't
have specific pointers.  There were some fairly elaborate scripts that
Slurm could run to create a scratch space allocation on the local
node - search back through the list archives.
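
For reference, the standard hooks for this are the Prolog, Epilog and
TaskProlog parameters in slurm.conf: Prolog and Epilog run as root on
the compute node to create and clean up the directory, and a TaskProlog
can set TMPDIR in the job environment, since slurmd turns "export
NAME=value" lines printed by the TaskProlog into environment variables
for the task.  A minimal sketch; the paths are illustrative, not from
any particular site:

# slurm.conf
Prolog=/etc/slurm/prolog.sh
Epilog=/etc/slurm/epilog.sh
TaskProlog=/etc/slurm/task_prolog.sh

# /etc/slurm/task_prolog.sh
#!/bin/sh
# Printed "export NAME=value" lines become task environment variables.
echo "export TMPDIR=/local/$SLURM_JOB_ID"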


[slurm-dev] Re: Job temporary directory

2017-01-20 Thread David Lee Braun

You can do it the same way as in Torque/Maui, with prolog and epilog scripts:


/var/slurm/prolog.sh

#!/bin/sh
# Runs as root on the compute node at job start: log the job record,
# then create the per-job scratch directories.
/export/SLURM/bin/scontrol --oneliner show job="$SLURM_JOB_ID" \
    >> /var/slurm/records/jobs.log

SLURM_LOCAL_SCRATCH="/scratch/$SLURM_JOB_USER/$SLURM_JOBID"
SLURM_REMOTE_SCRATCH="/export/lustre/$SLURM_JOB_USER/$SLURM_JOBID"
export SLURM_LOCAL_SCRATCH SLURM_REMOTE_SCRATCH

/bin/mkdir -p "$SLURM_LOCAL_SCRATCH" "$SLURM_REMOTE_SCRATCH"
/bin/chown "$SLURM_JOB_USER" "$SLURM_LOCAL_SCRATCH" "$SLURM_REMOTE_SCRATCH"

# Per-user parent directories are traversable by others (705); the
# per-job directories themselves are private to the user (700).
/bin/chmod 705 "/scratch/$SLURM_JOB_USER"
/bin/chmod 705 "/export/lustre/$SLURM_JOB_USER"
/bin/chmod 700 "$SLURM_LOCAL_SCRATCH" "$SLURM_REMOTE_SCRATCH"

/var/slurm/epilog.sh

#!/bin/sh
# Runs as root on the compute node at job end: remove the per-job
# scratch directories created by the prolog.
SLURM_LOCAL_SCRATCH="/scratch/$SLURM_JOB_USER/$SLURM_JOBID"
SLURM_REMOTE_SCRATCH="/export/lustre/$SLURM_JOB_USER/$SLURM_JOBID"

/bin/rm -rf "$SLURM_LOCAL_SCRATCH" "$SLURM_REMOTE_SCRATCH"
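
For these scripts to fire, slurm.conf on the nodes needs to point at
them.  Prolog and Epilog are standard slurm.conf parameters; the paths
below just follow the scripts above:

Prolog=/var/slurm/prolog.sh
Epilog=/var/slurm/epilog.sh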


/etc/profile.d/slurm.sh

# Recreate the scratch variables for shells started inside a job.
if [ -n "$SLURM_JOB_NAME" ]; then
    export SLURM_LOCAL_SCRATCH="/scratch/$USER/$SLURM_JOBID"
    export SLURM_REMOTE_SCRATCH="/export/lustre/$USER/$SLURM_JOBID"
fi

# Note: "set autologout=0" is csh syntax; it belongs in a matching
# /etc/profile.d/slurm.csh rather than in this sh script.
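
A job script can then use the variables directly.  A minimal
illustration (the program name is hypothetical, and results are copied
out because the epilog removes both scratch directories at job end):

#!/bin/sh
#SBATCH --job-name=demo
# Work in the node-local scratch directory created by the prolog.
cd "$SLURM_LOCAL_SCRATCH"
./my_program > output.dat
# Copy results out before the epilog deletes the scratch directories.
cp output.dat "$SLURM_SUBMIT_DIR/"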


Cheers,

David

On 01/20/2017 11:15 AM, John Hearns wrote:
> As I remember, in SGE and in PBS Pro a job has a directory created for it
> on the execution host which is a temporary directory, named with the jobid.
> You can define in the batch system configuration where the root of these
> directories is.
> 
> On running srun env, the only TMPDIR I see is /tmp.
> I know - RTFM.  I bet I haven't realised that this is easy to set up...
> 
> Specifically I would like a temporary job directory which is
> /local/$SLURM_JOBID
> 
> I guess I can create this in the job and then delete it, but it would be
> cleaner if the batch system deleted it, and didn't allow failed jobs or
> bad scripts to leave it on disk.

-- 
David Lee Braun
Manager of Computational Facilities
for Dr Charles L. Brooks, III Ph.D.
930 N. University Ave
Chemistry 2006


[slurm-dev] Re: Job temporary directory

2017-01-22 Thread Lachlan Musicman

We use the SPANK plugin found here:

https://github.com/hpc2n/spank-private-tmp

and find it works very well.
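
For context, SPANK plugins are enabled in Slurm's plugstack.conf.
Going from the plugin's README (option names and the .so path should be
verified against the version you build), the entry looks roughly like:

required /usr/lib64/slurm/private-tmpdir.so base=/local/slurm mount=/tmp mount=/var/tmp

The plugin gives each job a private mount namespace and bind-mounts
per-job directories under base= over the listed mount= paths, so every
job sees its own empty /tmp.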

--
The most dangerous phrase in the language is, "We've always done it this
way."

- Grace Hopper

On 21 January 2017 at 03:15, John Hearns  wrote:

> As I remember, in SGE and in PBS Pro a job has a directory created for it
> on the execution host which is a temporary directory, named with the jobid.
> You can define in the batch system configuration where the root of these
> directories is.
>
> On running srun env, the only TMPDIR I see is /tmp.
> I know - RTFM.  I bet I haven't realised that this is easy to set up...
>
> Specifically I would like a temporary job directory which is
> /local/$SLURM_JOBID
>
> I guess I can create this in the job and then delete it, but it would be
> cleaner if the batch system deleted it, and didn't allow failed jobs or
> bad scripts to leave it on disk.


[slurm-dev] Re: Job temporary directory

2017-01-22 Thread Christopher Samuel

On 23/01/17 08:40, Lachlan Musicman wrote:

> We use the SPANK plugin found here
> 
> https://github.com/hpc2n/spank-private-tmp
> 
> and find it works very well.

+1 to that, though we had to customise it to our environment (it breaks
when your nodes are diskless and your scratch area is a high-performance
parallel filesystem shared across all nodes).

https://github.com/vlsci/spank-private-tmp

All the best,
Chris
-- 
 Christopher Samuel        Senior Systems Administrator
 VLSCI - Victorian Life Sciences Computation Initiative
 Email: sam...@unimelb.edu.au Phone: +61 (0)3 903 55545
 http://www.vlsci.org.au/  http://twitter.com/vlsci