Hi Michael,
I'm not sure if this is right; I don't have much experience with
OpenMPI. On most Slurm installs I believe the MPI type defaults to none.
Have you tried adding --mpi=pmi2 or --mpi=openmpi to your srun command?
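Something along these lines (my_mpi_app is just a placeholder for your binary):

```shell
# Tell srun which MPI plugin to use for this job
srun --mpi=pmi2 -n 4 ./my_mpi_app

# You can also list the MPI plugin types your install supports
srun --mpi=list
```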
Ian
On Wed, Dec 16, 2015 at 9:50 AM, Michael Di Domen
Hello Hezi,
Have you tried making the shell for the batch script a login shell?
#!/bin/bash -l
I've had to do that in the past to make the module command behave on our
Centos 7.1 systems.
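For example, a minimal batch script along those lines (the module name is just an illustration):

```shell
#!/bin/bash -l
#SBATCH --job-name=test
#SBATCH --ntasks=1

# The -l above makes this a login shell, so the profile scripts
# are sourced and the module command gets defined
module load gcc
srun hostname
```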
Ian
On Thu, Dec 3, 2015 at 1:30 AM, Hezi Ismah-Moshe wrote:
>
> Hi,
>
> When I try to l
Hi Thomas,
We use sacctmgr -i in our scripts to suppress the confirmation prompt;
it works great.
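For example (the account name here is made up):

```shell
# -i / --immediate commits database changes without asking "Would you like to commit changes?"
sacctmgr -i add account test_acct Description="testing"
sacctmgr -i delete account test_acct
```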
Thanks,
Ian
On Tue, Oct 13, 2015 at 10:53 AM, Thomas Orgis
wrote:
>
> Hi,
>
> is it possible to avoid sacctmgr insisting on confirmation on database
> changes? Specifically, with 14.11.8, this does n
? Do I need to issue some command across the
slaves (in the past I've found that manually relaunching the slurmd
application has helped "$ bpsh -a slurmd").
Thank you,
--
~ Ian Lee
Lawrence Livermore National Laboratory
(W) 925-423-4941
t across all
>> nodes.
>>
>> How do we handle parameters that are different on a per node basis?
>>
>> For example, RealMemory may be X on node 1, but Y on node 2.
>>
>> Am I missing something?
>>
>>
>>
>
--
Ian Logan
Virtualization and Unix Systems Administrator
Information and Communication Technologies - New Mexico State University
Phone: 575-646-3054 Email: i...@nmsu.edu
Thanks Kilian,
This looks like exactly what I was looking for.
~ Ian Lee
Lawrence Livermore National Laboratory
(W) 925-423-4941
-----Original Message-----
From: Kilian Cavalotti [mailto:kilian.cavalotti.w...@gmail.com]
Sent: Tuesday, October 07, 2014 9:33 PM
To: slurm-dev
Subject: [slurm-dev
natively, how would I configure SLURM to do what I really want, which is
to drain a node if the disk space on a particular disk is insufficient?
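For reference, the kind of thing I have in mind is a periodic check roughly like the following (the path and threshold are just placeholders):

```shell
#!/bin/bash
# Hypothetical check: drain this node if /tmp has less than 1 GB free
FREE_KB=$(df --output=avail /tmp | tail -1)
if [ "$FREE_KB" -lt 1048576 ]; then
    scontrol update NodeName=$(hostname -s) State=DRAIN Reason="low disk on /tmp"
fi
```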
Thank you,
~ Ian Lee
Lawrence Livermore National Laboratory
(W) 925-423-4941
first compute
node are two different hosts. That's why I can't see any output on the
control node.
Again, thank you for your reply!
Best regards,
Ian Malcolm
59:15] debug2: Tree head got back 1
[2014-01-04T16:59:15] debug2: Tree head got them all
[2014-01-04T16:59:16] debug2: node_did_resp SLURM1
miao@SLURM0:~$ srun -n2 -l hostname
1: SLURM2
0: SLURM1
miao@SLURM0:~$
Thank you very much!
Best regards
Ian Malcolm
vents from "strigger --get" and slurmctld.log.
Although the SLURM help page says that a check for trigger events is
performed every 15 seconds, nothing returns after that time ><"
The event-handling program I used simply echoes a message.
My SLURM version is 2.1.0.
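For reference, I registered the trigger roughly like this (the handler path is a placeholder):

```shell
# Register a trigger that runs a handler program when any node goes down
strigger --set --node --down --program=/home/ian/trigger_handler.sh

# List the currently registered triggers
strigger --get
```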
Thanks
Ian