On Friday, 23 June 2017, at 00:46:58 (-0600),
Ole Holm Nielsen wrote:
> The requirement of removing "=" must be a bug since "rpmbuild --help" says:
>
> --with=<option>       enable configure <option> for build
> --without=<option>    disable configure <option> for build
RPM binaries handle certain command-
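As a concrete illustration of the "=" issue (the tarball name below is hypothetical):

```shell
# The popt aliases behind --with/--without expect the option name as a
# separate argument, so this form is commonly reported to work:
rpmbuild -ta slurm-17.02.5.tar.bz2 --with mysql --without debug
# ...whereas the "--with=mysql" spelling printed by "rpmbuild --help"
# may be rejected, which is the suspected bug discussed above.
```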
As a reminder, Slurm User Group Meeting Abstract submissions are *due at
the end of this week* on *June 30, 2017*.
You are invited to submit an abstract of a tutorial, technical
presentation or site report to be given at the Slurm User Group Meeting
2017. This event is sponsored and organized
On 06/22/2017 01:34 PM, Ole Holm Nielsen wrote:
I'm announcing an updated version 0.50 of the node status tool "pestat"
for Slurm. I discovered how to obtain the node Free Memory with sinfo,
so now we can do nice things with memory usage!
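A sketch of the kind of sinfo call a pestat-style tool can use to read per-node free memory (the -O/--Format field names are from memory and may differ across Slurm versions; check "man sinfo"):

```shell
# Print per-node state, CPU load, total and free memory.
# Field names (nodehost, partition, statecompact, cpusload, memory,
# freemem) are assumed --Format fields for this Slurm release.
sinfo --noheader -N -O "nodehost,partition,statecompact,cpusload,memory,freemem"
```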
Hi! Thank you for the great tool! I don't know if t
Hi Ole,
It's possible that it was a temporary glitch, because everything looks OK to me now.
Hostname      Partition  Node   Num_CPU  CPUload  Memsize  Freemem  Joblist
                         State  Use/Tot           (MB)     (MB)     JobId User ...
devel-pcomp1  vtest*     idle     0  12
Thanks Paul! Would you know the answer to:
Question 2: Can anyone confirm that the output "slurmdbd: debug2:
Everything rolled up" indeed signifies that conversion is complete?
Thanks,
Ole
On 06/26/2017 04:02 PM, Paul Edmon wrote:
Yeah, we keep around a test cluster environment for that purpose to vet
slurm upgrades before we roll them on the production cluster.
Hi,
I'm currently investigating the differences between Slurm and PBS/TORQUE,
especially their respective strengths and weaknesses, the services that
one of them provides and the other does not, and, most importantly, the
difference in their performance.
I would be very grateful
Yeah, we keep around a test cluster environment for that purpose to vet
slurm upgrades before we roll them on the production cluster.
Thus far no problems. However, paranoia is usually a good thing for
cases like this.
-Paul Edmon-
On 06/26/2017 07:30 AM, Ole Holm Nielsen wrote:
On 06/26/2017 01:24 PM, Loris Bennett wrote:
Hi Ole,
We have also upgraded in place continuously from 2.2.4 to currently
16.05.10 without any problems. As I mentioned previously, it can be
handy to make a copy of the statesave directory, once the daemons have
been stopped.
However, if you want to know how long the upgrade might take, then
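A minimal sketch of the statesave backup step described above (the path and unit name are assumptions; use the StateSaveLocation from your own slurm.conf):

```shell
# Stop the controller so the statesave files are quiescent, then copy.
systemctl stop slurmctld
cp -a /var/spool/slurmctld /var/spool/slurmctld.bak-$(date +%F)
# ...upgrade the packages, then restart:
systemctl start slurmctld
```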
We did it in place, and it did what it said on the tin. It was less
painful than I expected. TBH, your procedures are admirable, but you
shouldn't worry - it's a relatively smooth process.
cheers
L.
--
"Mission Statement: To provide hope and inspiration for collective action,
to build collective power
Hi Kilian,
Thanks for explaining how to configure ClusterShell correctly for Slurm!
I've updated my Wiki information in
https://wiki.fysik.dtu.dk/niflheim/SLURM#clustershell now.
I would suggest that you add your examples to the ClusterShell
documentation, where I feel it may be hidden or mi
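For readers who have not seen it, ClusterShell can bind node groups to Slurm via an external group source; a sketch along these lines (the file path and sinfo format strings should be checked against the ClusterShell documentation for your version):

```
# /etc/clustershell/groups.conf.d/slurm.conf (assumed location)
[slurmpart]
map: sinfo -h -o "%N" -p $GROUP
all: sinfo -h -o "%N"
list: sinfo -h -o "%R"
```

After that, clush and nodeset can address a partition as a group, e.g. `clush -w @slurmpart:defq uptime` (the partition name is illustrative).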
On 23-06-2017 17:20, Belgin, Mehmet wrote:
One thing I noticed is that pestat reports zero Freemem until a job is
allocated on nodes. I’d expect it to report the same value as Memsize if
no jobs are running. I wanted to offer this as a suggestion since zero
free memory on idle nodes may be a b
We're planning to upgrade Slurm 16.05 to 17.02 soon. The most critical
step seems to me to be the upgrade of the slurmdbd database, which may
also take tens of minutes.
I thought it's a good idea to test the slurmdbd database upgrade locally
on a drained compute node in order to verify both
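A sketch of how such a local test can look (the database name and the absence of credentials are assumptions; adjust to your site):

```shell
# On the production database host: dump the accounting database.
mysqldump slurm_acct_db > slurm_acct_db.sql
# On the drained test node: load the dump into a local MySQL/MariaDB.
mysql -e 'CREATE DATABASE slurm_acct_db'
mysql slurm_acct_db < slurm_acct_db.sql
# Install the new slurmdbd on the test node and run it in the
# foreground with extra verbosity to watch and time the conversion:
slurmdbd -D -vv
```

Timing this foreground run gives an estimate for the production upgrade window.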