12:57 PM, Daniel Ruiz Molina <1daniel.r...@caos.uab.es> wrote:
Hi Miguel,
No, I don't need to connect to multiple SLURM clusters, so I have only one
slurmctld. What I need is to connect some computers on the "1.1.1.0"
network to an HPC cluster on "1.1.2.0".
So my question (or doubt) is what I need to share between the two
networks to allow submitting jobs from one side to the other.
On 12/06/2017 at 7:47, Miguel Gila wrote:
Hi Daniel,
My replies inline below.
On 9 Jun 2017, at 18:21, Daniel Ruiz Molina wrote:
Hello,
I have this scenario and I need to know whether SLURM can work with it.
I have some computers that will act as "submit" SLURM hosts; in other
words, those hosts won't run batch or interactive jobs. Those computers
are configured on a network such as 1.1.1.0/24, for example.
Hello,
I'm migrating from SGE 6.2u5 to SLURM 16.06. In SGE I had configured
ARCo to export accounting and reporting data, and then, from a centralized
server, I could execute some SQL queries and generate PDFs or Excel files.
Now I would like to know if there is a similar tool.
Thanks.
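[Editorial sketch, not from the thread: SLURM's rough counterpart of ARCo is the slurmdbd accounting daemon backed by MySQL/MariaDB, queried with sacct and sreport rather than raw SQL. A minimal sketch, assuming slurmdbd is already configured; the dates are illustrative:]

```shell
# Per-job accounting for a date range, in a delimiter-separated form
# suitable for importing into a spreadsheet
sacct --starttime=2017-06-01 --endtime=2017-06-30 \
      --format=JobID,User,Partition,Elapsed,State --parsable2 > report.csv

# Cluster utilization summarized per user
sreport cluster AccountUtilizationByUser start=2017-06-01 end=2017-06-30
```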
Hello,
I have reconfigured SLURM:
* slurm.conf: NodeName=mynode CPUs=8 SocketsPerBoard=1
CoresPerSocket=4 ThreadsPerCore=2 RealMemory=7812 TmpDisk=50268
Gres=gpu:2 (without specifying the GPU model)
* gres.conf: two separate lines:
NodeName=mynode Name=gpu Count=1
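[Editorial sketch, not from the thread: when the GPUs are listed on separate gres.conf lines, each line normally points at its own device file rather than using Count. The device paths below are assumptions for illustration:]

```
# gres.conf -- one line per GPU device (paths assumed)
NodeName=mynode Name=gpu File=/dev/nvidia0
NodeName=mynode Name=gpu File=/dev/nvidia1
```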
Hello,
I would like to know if it is possible to configure a compute node to
share 2 different GPUs.
According to my configuration files:
* slurm.conf has a line like this: "NodeName=mynode CPUs=8
SocketsPerBoard=1 CoresPerSocket=4 ThreadsPerCore=2
RealMemory=7812 T
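[Editorial sketch, not from the thread: one way to expose two different GPU models on one node is typed GRES, so jobs can request a specific model with e.g. --gres=gpu:tesla:1. Model names and device paths below are assumptions:]

```
# slurm.conf (node definition; typed GRES counts assumed)
NodeName=mynode ... Gres=gpu:tesla:1,gpu:quadro:1

# gres.conf -- Type distinguishes the two models
NodeName=mynode Name=gpu Type=tesla  File=/dev/nvidia0
NodeName=mynode Name=gpu Type=quadro File=/dev/nvidia1
```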
Hi,
I'm adding users to accounts in the accounting information. However, some
users on my system have capital letters in their names, and when I try to
add them to their account, sacctmgr returns this message: "There is no
uid for user 'MY_USER' Are you sure you want to continue?".
Then, if I answer "y", user is a
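[Editorial note, not from the thread: sacctmgr resolves usernames through the system password database, and that lookup is case-sensitive. A small sketch (the mismatched name is hypothetical) showing how the lookup fails when the case doesn't match the account as stored in /etc/passwd or LDAP:]

```python
import pwd

def uid_for(username):
    """Return the uid for username, or None if the (case-sensitive) lookup fails."""
    try:
        return pwd.getpwnam(username).pw_uid
    except KeyError:
        return None

# 'root' exists on virtually every system; 'ROOT' is a different string as
# far as the password database is concerned, so the lookup fails -- the same
# reason sacctmgr reports "There is no uid for user ..." for a wrongly-cased name.
print(uid_for("root"))
print(uid_for("ROOT"))   # None
```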
Hi,
In my GPU cluster, the slurmd daemon doesn't start correctly because,
when the daemon starts, it doesn't find the /dev/nvidia[0-1] devices
(mapped in gres.conf). To solve this problem, I have added the attribute
"ExecStartPre=@/usr/bin/nvidia-smi >/dev/null" to the service file and now
the daemon starts correc
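[Editorial sketch, not from the thread: running nvidia-smi once loads the driver and creates the /dev/nvidia* nodes, which is the side effect that matters here. Rather than editing the packaged unit, the same idea can go in a drop-in override; the `-` prefix tells systemd to ignore a non-zero exit status. The drop-in path is an assumption:]

```
# /etc/systemd/system/slurmd.service.d/nvidia.conf
[Service]
ExecStartPre=-/usr/bin/nvidia-smi
```

followed by `systemctl daemon-reload`.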
Hello,
I have noticed that across my whole cluster (running CentOS-7.x x86_64),
the slurmd daemon doesn't start automatically at boot, even though it is
configured as "enabled".
My slurmd systemd file is:
# /usr/lib/systemd/system/slurmd.service
[Unit]
Description=Slurm
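[Editorial sketch, not from the thread: a frequent cause of this symptom is slurmd starting before the network or munge is up. These are the ordering directives the [Unit] section usually needs; exact unit names may differ per distribution:]

```
[Unit]
Description=Slurm node daemon
After=network.target munge.service
```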
Hello,
I would like to know how I can submit a parallel job with a fill-up policy
(not round-robin). For example, I want to run 12 "mpihelloworld" tasks on two
computers, one socket per computer, 4 cores per socket, 2 threads per
core, but I want to fill one computer first (8 tasks) and, then, take a
s
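[Editorial note, not from the thread: SLURM's analogue of SGE's fill-up is block task distribution, selected with srun/sbatch `-m block` / `--distribution=block`, as opposed to cyclic (round-robin). A small sketch -- not SLURM code, just the placement arithmetic -- contrasting the two policies for 12 tasks on two 8-slot nodes:]

```python
def block(ntasks, nodes, slots):
    """Fill-up: pack each node to its slot count before moving to the next."""
    placement = []
    for node in range(nodes):
        take = min(slots, ntasks - len(placement))
        placement += [node] * take
    return placement

def cyclic(ntasks, nodes):
    """Round-robin: deal tasks across the nodes one at a time."""
    return [t % nodes for t in range(ntasks)]

# 12 tasks, 2 nodes with 8 slots each (1 socket x 4 cores x 2 threads)
print(block(12, 2, 8))   # first 8 tasks on node 0, remaining 4 on node 1
print(cyclic(12, 2))     # tasks alternate between the two nodes
```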
wrote:
On 11/29/2016 12:27 PM, Daniel Ruiz Molina wrote:
I would like to know if it would be possible in SLURM to configure two
partitions, composed of the same nodes, but one for use with GPUs and
the other one only for OpenMPI. This configu
Hi,
I would like to know if it would be possible in SLURM to configure two
partitions, composed of the same nodes, but one for use with GPUs and
the other one only for OpenMPI. This configuration was allowed in Sun
Grid Engine because the GPU resource was assigned to the queue and to the
compute n
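[Editorial sketch, not from the thread: SLURM allows overlapping partitions over the same node set in slurm.conf; GPU access is then tied to the GRES request rather than the partition. Partition and node names below are assumptions:]

```
# slurm.conf -- same nodes in two partitions; jobs in "gpu" request GPUs via --gres
PartitionName=gpu     Nodes=node[01-04] Default=NO  MaxTime=INFINITE State=UP
PartitionName=openmpi Nodes=node[01-04] Default=YES MaxTime=INFINITE State=UP
```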
Hi,
I'm new to SLURM and I'm trying to configure my small cluster. One
compute node has 1 GPU for compute. In my slurm.conf file I have
added the following parameters:
[...]
NodeName=my_compute_node CPUs=8 SocketsPerBoard=1 CoresPerSocket=4
ThreadsPerCore=2 RealMemory=7812 T
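[Editorial sketch, not from the thread: for a single-GPU node the node definition typically also carries Gres=gpu:1, with a matching gres.conf entry. The device path is an assumption:]

```
# slurm.conf (node line, abbreviated; Gres addition assumed)
NodeName=my_compute_node ... Gres=gpu:1

# gres.conf
NodeName=my_compute_node Name=gpu File=/dev/nvidia0
```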