sinfo -h -p $1 -s
But, this sinfo command returned no result.
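For reference, here is a slightly expanded sketch of the wrapper script being called (the sinfo path and log location are placeholders, and it assumes the partition name is passed in as $1 from the Lua script):
#!/bin/bash
# Hypothetical wrapper invoked from job_submit.lua (e.g. via io.popen).
# job_submit.lua runs inside slurmctld, whose PATH may be minimal, so an
# absolute path to sinfo is used; adjust the paths to your installation.
exec 2>>/tmp/job_submit_sinfo.log   # keep stderr somewhere visible for debugging
/usr/bin/sinfo -h -p "$1" -s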
Regards,
Chansup
On Fri, Apr 3, 2020 at 1:28 AM Marcus Wagner wrote:
> Hi Chansup,
>
> could you provide a code snippet?
>
> Best
> Marcus
>
> Am 02.04.2020 um 19:43 schrieb CB:
> > Hi,
> >
Hi,
I'm running Slurm 19.05.
I'm trying to execute some Slurm commands from the Lua job_submit script
for a certain condition.
But I found that it's not executed and returns nothing.
For example, I tried to execute a "sinfo" command from an external shell
script but it didn't work.
Does Slurm
>
> So why not make another partition encompassing both sets of nodes?
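A sketch of that suggestion, with hypothetical partition and node names, would be one extra line in slurm.conf spanning both node sets:
PartitionName=both Nodes=nodeA[01-16],nodeB[01-16] MaxTime=INFINITE State=UP
and the MPI job would then be submitted with -p both.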
>
> > On Mar 23, 2020, at 10:58 AM, CB wrote:
> >
> > Hi Andy,
> >
> > Yes, they are on the same network fabric.
> >
> > Sure, creating another partition that encompasses all of th
Andy
>
>
>
> *From:* slurm-users [mailto:slurm-users-boun...@lists.schedmd.com] *On
> Behalf Of *CB
> *Sent:* Monday, March 23, 2020 11:32 AM
> *To:* Slurm User Community List
> *Subject:* [slurm-users] Running an MPI job across two partitions
>
>
>
> Hi,
>
Hi,
I'm running Slurm version 19.05.
Is there any way to launch an MPI job on a group of distributed nodes from
two or more partitions, where each partition has distinct compute nodes?
I've looked at the heterogeneous job support but it creates two separate
jobs.
If there is no such
nessee Tech University
>
> > On Mar 4, 2020, at 2:05 PM, CB wrote:
> >
> > External Email Warning
> > This email originated from outside the university. Please use caution
> when opening attachments, clicking links, or responding to requests.
> > Hi,
> >
>
Hi,
I'm running Slurm 19.05.5.
I've tried to write a job submission script for a heterogeneous job
following the example at https://slurm.schedmd.com/heterogeneous_jobs.html
But it failed with the following error message:
$ sbatch new.bash
sbatch: error: Invalid directive found in batch
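A minimal two-component script using the older "packjob" separator (which, as far as I can tell, is what 19.05 understands rather than the newer "hetjob" directive) would look roughly like this; the resource counts and application name are placeholders:
#!/bin/bash
#SBATCH --job-name=het_test
#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH packjob
#SBATCH --nodes=2
#SBATCH --ntasks=8
srun ./my_app   # placeholder application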
> can still achieve the same functionality using exported environment
> variables as per the mpirun man page, like this:
>
> OMPI_MCA_rmaps_base_mapping_policy=node srun --export OMPI_MCA_rmaps_base_mapping_policy ...
>
> in your sbatch script.
>
> Brian
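Put into a batch script, that suggestion might look like the following sketch (node and task counts and the application name are placeholders):
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
# Set Open MPI's mapping policy through its MCA environment variable and
# export it to the tasks; ALL keeps the rest of the environment as well.
OMPI_MCA_rmaps_base_mapping_policy=node srun --export=ALL,OMPI_MCA_rmaps_base_mapping_policy ./my_mpi_app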
Hi Everyone,
I've recently discovered that when an MPI job is submitted with the
--exclusive flag, Slurm fills up each node even if the --ntasks-per-node
flag is used to set how many MPI processes are scheduled on each node.
Without the --exclusive flag, Slurm works fine as expected.
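The submissions in question look roughly like this sketch (node and task counts and the application name are placeholders):
#!/bin/bash
#SBATCH --exclusive
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=2   # intended: 2 MPI ranks per node, 8 ranks in total
srun ./my_mpi_app             # placeholder application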
Our system
Hi,
We've recently upgraded to Slurm 17.11.7 from 16.05.8.
We noticed that the environment variable, HOSTNAME, does not reflect the
compute node with an interactive job using the salloc/srun command.
Instead, it still points to the submit hostname, although SLURMD_NODENAME
reflects the correct
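A quick way to compare the two from inside an interactive allocation:
salloc -N1 srun --pty /bin/bash
echo "HOSTNAME=$HOSTNAME"                 # still shows the submit host, as described above
echo "SLURMD_NODENAME=$SLURMD_NODENAME"   # set by slurmd; shows the compute node
hostname                                  # what the node itself reports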