How exactly do I link "SLURM's implementation of the PMI library" with my
executable?
Which path must I give?
The documentation just mentions mpicc -L -lpmi ...
I don't understand what exactly I should put after -L.
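For example, something along these lines (the /usr/lib64 path is only a
guess; -L should point at whatever directory holds Slurm's libpmi on your
system):

# Locate Slurm's PMI library first; the directory it lives in is what -L needs.
ldconfig -p | grep libpmi
# Then compile against it, e.g. if it turned up in /usr/lib64:
mpicc -L/usr/lib64 -lpmi HelloWorld.c -o HelloWorld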
On Tue, Nov 26, 2013 at 7:06 PM, Jonathan Perkins
<perki...@cse.ohio-state.edu> wrote:
Terrific, thanks Moe!
On 11/26/13 18:00, Moe Jette wrote:
These changes are already in the next major release of Slurm (v14.03).
Installation instructions will be included in the next update of our
web pages (likely within a month or two). Thanks!
Moe
Quoting Jason Bacon:
FYI, SLURM is now included in the official FreeBSD ports collection.
Thanks Jeff, but the log file path is at the discretion of the user; it's
unlikely to be in the directory that contains the batch script (command
file), or in the workdir. It's also not specified in the command file, as
these scripts can be templates that are submitted multiple times.
On Tue, 2013-11
> Is there any way in slurm to retrieve the std_out filename from a
> running job?
Hi Franco,
A work-around might be a script that uses the output of `scontrol show job
<jobid>` and either (1) grabs the --output or -o setting from the Command
script (if available), or (2) checks the script from SlurmdSpoolDir
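A rough sketch of that suggestion (untested; the exact scontrol parsing
and the grep pattern are my own assumptions):

#!/bin/bash
# Usage: ./stdout-workaround.sh <jobid>
# Pull the batch script path from scontrol, then look for an explicit
# #SBATCH --output/-o directive inside it.
jobid=$1
script=$(scontrol show job "$jobid" | sed -n 's/^ *Command=//p')
grep -E '^#SBATCH[[:space:]]+(--output|-o)' "$script"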
On Tue, 2013-11-26 at 15:09 -0800, Moe Jette wrote:
> Quoting Franco Broi:
>
> >
> > Hi
> >
> > Is there any way in slurm to retrieve the std_out filename from a
> > running job?
>
> That is in the next release (14.03, March 2014)
Excellent, but I can't wait until March. I assume there's a pre-release
These changes are already in the next major release of Slurm (v14.03).
Installation instructions will be included in the next update of our
web pages (likely within a month or two). Thanks!
Moe
Quoting Jason Bacon:
FYI, SLURM is now included in the official FreeBSD ports collection.
Quoting Franco Broi:
Hi
Is there any way in slurm to retrieve the std_out filename from a
running job?
That is in the next release (14.03, March 2014)
I've also noticed that the load average is missing from the node
information in the Perl API; I've modified my own version to get it
working
Hi All,
I have a few questions about SLURM; I tried to search the mailing list and
documentation but didn't find a convincing answer.
I have a job script which looks like:
#!/bin/bash -l
#SBATCH --job-name="test"
#SBATCH --nodes=64
#SBATCH --ntasks-per-node=1
#SBATCH --ntasks=64
#SBATCH --exclusive
#
Well,
I'm almost there, but would appreciate one last tip.
I was (partially) wrong in my previous explanation of why I was not
getting CUDA_VISIBLE_DEVICES at prologue time. If I submit a job
asking for 2 nodes, 1 GPU card each, it happens that the variable is
defined on node 2 (again at prologue time)
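For reference, the kind of check involved looks roughly like this (purely
illustrative; the log path is made up, and whether CUDA_VISIBLE_DEVICES is
set at that point is exactly the open question):

#!/bin/bash
# Example Prolog fragment: record what each node sees at prologue time.
echo "$(hostname) job=${SLURM_JOB_ID:-?} CUDA_VISIBLE_DEVICES=${CUDA_VISIBLE_DEVICES:-unset}" \
  >> /tmp/prolog_gpu_check.log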
This depends on your cluster's setup. If you built your MPI application on
a shared filesystem that is available on each node, then you do not need
to broadcast the executables around and should be able to use the srun
command directly.
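For example (the task count and binary name are just placeholders):

# Assuming ./HelloWorld was built against Slurm's PMI library and lives on
# a filesystem visible to every allocated node:
srun -n 4 ./HelloWorld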
One other thing that you need to keep in mind is that the sup
To run a job with MVAPICH2 under SLURM, I configured MVAPICH2 with
./configure --with-pm=no --with-pmi=slurm
Then I have a question about compiling MPI jobs. The SLURM documentation
mentions that we need to link the Slurm PMI library with the executable
during the compilation step:
mpicc -L -lpmi HelloWorld.c
I