Hello!
I've read that version 16.05 will include
Kerberos support.
http://slurm.schedmd.com/news.html#1605
"Add Kerberos credential support including credential forwarding and refresh."
I couldn't find any explanation of what this is exactly (auks, native ...).
It would be helpful if somebody
Dear all,
I'm trying to figure out how to configure a "cluster" with a single
computer (i.e., the execution node and the master node are the same). After I
figure this out, I hope that setting up a cluster with multiple nodes
is not too difficult.
In particular, I think the default setting permits only a sin
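For reference, here is a rough sketch of the slurm.conf I have in mind for one
machine acting as both controller and compute node (the host name "myhost" and
the CPU count are placeholders for my real box; I have not verified this):

# One machine runs both slurmctld and slurmd.
ClusterName=mycluster
ControlMachine=myhost
SlurmUser=slurm
AuthType=auth/munge
# The same host doubles as the only compute node (4 cores assumed).
NodeName=myhost CPUs=4 State=UNKNOWN
PartitionName=debug Nodes=myhost Default=YES MaxTime=INFINITE State=UP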
2015-09-08 12:55 GMT+02:00 Raymond Wan :
>
> Dear all,
>
> I'm trying to figure out how to configure a "cluster" with a single
> computer (i.e., the execution node and the master node are the same). After I
> figure this out, I hope that setting up a cluster with multiple nodes
> is not too difficult.
>
> In p
On Tue, 08 Sep 2015 03:56:00 -0700,
Raymond Wan wrote:
> I *think* this is "impossible" to do since it would be hard to force
> users to write to one partition and not any others. But, I thought
> I'd ask anyway in case there is something within SLURM that I've
> missed. Any suggestions?
You
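If the question is about Slurm partitions (queues) rather than disk
partitions, one untested idea is to restrict access per partition in
slurm.conf via AllowGroups, so that ordinary users simply cannot submit
anywhere else (group names invented):

# Everyone may use "main"; only members of group "wheel" may use "admin".
PartitionName=main  Nodes=myhost Default=YES AllowGroups=ALL   State=UP
PartitionName=admin Nodes=myhost Default=NO  AllowGroups=wheel State=UP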
Hello Everyone,
After upgrading from 14.11.3 to 14.11.9, an 'scontrol show conf' crashes
the primary slurmctld daemon.
It ends abruptly and nothing gets logged; the backup slurmctld on
another host does take over.
Has anyone come across this before?
Maybe our slurm.conf has a syntax error?
Can you show us the stack using gdb?
Thanks
/David/Bigagli
da...@schedmd.com
===
Slurm User Group Meeting, 15-16 September 2015, Washington D.C.
http://slurm.schedmd.com/slurm_ug_agenda.html
> On 08 Sep 2015, at 16:45, Marcin Sliwowsk
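In case it helps, a minimal way to capture that stack from the running daemon
(the pidof lookup is just illustrative) would be:

# Attach to the live slurmctld and dump every thread's backtrace.
# Note: the daemon is paused for as long as gdb stays attached.
$ gdb -p $(pidof slurmctld) -batch -ex 'thread apply all bt'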
David,
First, thanks for looking into this.
I tried attaching gdb to slurmctld while it is running, and it complains
about "Missing separate debuginfos"; the output is below.
The odd thing is that while I am attached to slurmctld, the daemon
appears unresponsive because various slurm commands
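On RHEL-style systems the usual cure for that gdb complaint is to install the
debug symbols first, assuming a slurm debuginfo package and repository exist
for your build (debuginfo-install comes with yum-utils):

# Pull the separate debuginfo so gdb can resolve the slurmctld frames.
$ sudo debuginfo-install slurm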
We have seen similar issues on 14.11.8 but haven't bothered to diagnose
or report them. I think I've seen this twice so far out of dozens of new users.
Ryan
On 09/07/2015 09:16 AM, Loris Bennett wrote:
Hi,
This problem occurs with 14.11.8.
A user I set up today got the following error when su
Hi,
I've noticed that configure checks for JSON parser availability; however,
on RHEL6-based systems the json-c-devel RPM from EPEL(6) installs to
/usr/include/json while the configure check is for /usr/include/json-c
(configure:19424).
BTW, when building without packaging (i.e. tar xf
slurm-15
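Until the check itself is fixed, a possible local workaround (untested; paths
as shipped by EPEL6) is to hand configure the actual header location, or to
create the directory name it expects:

# Point the build at EPEL's json-c headers ...
$ ./configure CPPFLAGS="-I/usr/include/json"
# ... or provide the path the configure check looks for.
$ sudo ln -s /usr/include/json /usr/include/json-c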
Dear Slurmers,
after updating to 15.08 we made a curious observation: going by "squeue -r |
wc -l", it seemed that not all jobs of large arrays were being submitted anymore.
Let me illustrate:
$ sbatch
--array=25212,25213,25214,25216,25217,25218,25219,25220,25221,25222,25223,25268,25269,25270,25271,
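To make the discrepancy concrete, we compared the per-task count from squeue
with what scontrol recorded for the array (the job ID below stands for
whatever sbatch printed):

# One line per array task, counted ...
$ squeue -r -u $USER | wc -l
# ... versus the array specification Slurm stored for the job.
$ scontrol show job <jobid> | grep -i ArrayTaskId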
Quoting Daniel Letai :
Hi,
I've noticed that configure checks for JSON parser availability;
however, on RHEL6-based systems the json-c-devel RPM from EPEL(6)
installs to /usr/include/json while the configure check is for
/usr/include/json-c (configure:19424).
Fixed here:
https://github.com/Sc
Hi All,
We have a couple of nodes with 8 Nvidia Titan X GPUs each. We have some software
that can run in parallel across GPUs, but performance is only good if the
inter-GPU communication stays on the PCI links of a single CPU socket.
Right now, the only thing I have been able to work reliab
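The closest thing I have found in the documentation is pinning each GPU to the
cores of its local socket in gres.conf, roughly as below (device paths and
core ranges are invented for a two-socket, 16-core box; untested):

# GPUs 0-3 sit on socket 0 (cores 0-7), GPUs 4-7 on socket 1 (cores 8-15).
Name=gpu File=/dev/nvidia[0-3] CPUs=0-7
Name=gpu File=/dev/nvidia[4-7] CPUs=8-15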
Dear Thomas and Marcin,
Thank you both for your prompt replies!
On Tue, Sep 8, 2015 at 8:15 PM, Thomas Orgis
wrote:
> On Tue, 08 Sep 2015 03:56:00 -0700,
> Raymond Wan wrote:
>
>> I *think* this is "impossible" to do since it would be hard to force
>> users to write to one partition