Hello,
I upgraded from 17.02.10 to 17.11.6 on EL6.9 and am getting the errors below.
Database is on EL7 mariadb-5.5.
After yum update slurm:
# slurmdbd -D -vvv
slurmdbd: debug: Log file re-opened
slurmdbd: debug: Munge authentication plugin loaded
slurmdbd: debug2: mysql_connect() called for db s
https://www.schedmd.com/news.php?id=201#OPT_201
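[Editor's note: the 17.11 slurmdbd upgrade converts large accounting tables, and SchedMD's accounting documentation recommends backing up the database and raising MariaDB's InnoDB limits before starting the converted daemon. A minimal my.cnf sketch along those lines (the sizes are illustrative and should be tuned to the database):

```ini
# /etc/my.cnf fragment on the MariaDB host (values illustrative;
# size innodb_buffer_pool_size to your accounting database)
[mysqld]
innodb_buffer_pool_size  = 1024M
innodb_log_file_size     = 64M
innodb_lock_wait_timeout = 900
```

After changing these, restart MariaDB, take a `mysqldump` of the slurm_acct_db, and then run `slurmdbd -D -vvv` once in the foreground so the table conversion can finish without hitting the daemon start timeout.]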
Chris,
I don't have support but was able to revert.
Thanks,
Tina
> On 05/07/2018 10:19 PM, Tina Fora wrote:
>> Hello,
>>
>> I upgraded from 17.02.10 to 17.11.5 on EL6.9 and getting the errors
>> below.
>> Database
Hello All,
I compiled Slurm with the standard rpmbuild. Upgrading from 17.02 to 17.11.9-2
is giving the error below. I'm not sure what the issue is with the accounting
storage plugin, because it seems to load OK. I tried to run the failed MySQL
query manually and it returns an SQL syntax error (full
ng the Slurm upgrade on a test server and not the production cluster?
>
> Did you check the steps mentioned in the thread "slurmdbd:
> mysql/accounting errors on 17.11.5 upgrade" which you initiated on May 7?
>
> /Ole
>
>
> On 14-08-2018 17:53, Tina Fora wrote:
>
Hi all,
We run mysql on a dedicated machine with slurmctld and slurmdbd running on
another machine. Now I want to add another machine running slurmctld and
slurmdbd, and this machine will be on CentOS 7. The existing one is CentOS 6.
Is this possible? Can I run two separate slurmctld and slurmdbd point
faster.
Cheers.
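[Editor's note: Slurm's supported way to run a second controller and dbd is as backups of the first, not as independent daemons against the same tables. A hedged sketch of that layout (all hostnames here are hypothetical; both hosts should run the same Slurm version even if the OS differs):

```
# slurmdbd.conf (same file on both dbd hosts)
DbdHost=dbd1
DbdBackupHost=dbd2
StorageType=accounting_storage/mysql
StorageHost=mysqlserver
StorageUser=slurm

# slurm.conf (shared by all nodes)
ControlMachine=ctl1
BackupController=ctl2
AccountingStorageHost=dbd1
AccountingStorageBackupHost=dbd2
```

The backup pair takes over only when the primary is unreachable; it does not add throughput.]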
> On 2/7/19 1:48 pm, Tina Fora wrote:
>
>> We run mysql on a dedicated machine with slurmctld and slurmdbd running
>> on
>> another machine. Now I want to add another machine running slurmctld and
>> slurmdbd and this machine with be on CentOS 7. E
Hello,
Is there a way to force a job to run that is being held back for
QOSGrpCpuLimit? This is coming from a QOS that we have in place. For the
most part it works great but every once in a while we have free nodes that
are idle and I'd like to force the job to run.
Tina
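[Editor's note: there is no single "force" flag for a QOS-held job, but two common workarounds are moving the job to a QOS without the group CPU cap, or temporarily raising the cap itself. A dry-run sketch; the job ID (123456) and QOS names (urgent, normal) and the cpu=2048 value are placeholders, and the leading `echo` should be dropped on a real cluster:

```shell
# Sketch only: 123456 and the QOS names are placeholders;
# remove the leading 'echo' on a real cluster to apply.
JOBID=123456
# Move the held job to a QOS that has no GrpTRES=cpu cap:
echo scontrol update jobid="$JOBID" qos=urgent
# Or temporarily raise the group CPU cap on the current QOS:
echo sacctmgr -i modify qos normal set GrpTRES=cpu=2048
```

Either change takes effect on the next scheduling pass; remember to restore the cap afterwards if you raise it.]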
Hi,
I'm adding a bunch of memory on two of our nodes that are part of a blade
chassis. So two computes will be upgraded to 1TB RAM and the rest have
192GB. All of the nodes belong to several partitions and can be used by our
paid members given the partition below. I'm looking for ways to figure out
s
> are used up, or a job needing > 192GB is running on them.
>
> Brian Andrus
>
> On 9/4/2019 9:53 AM, Tina Fora wrote:
>> Hi,
>>
>> I'm adding a bunch of memory on two of our nodes that are part of a
>> blade
>> chassis. So two computes will be upgra
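[Editor's note: one way to keep the two 1TB nodes free for jobs that actually need them is node weights plus a feature tag. A slurm.conf sketch; node names, memory values, and the feature name are illustrative for this site:

```
# slurm.conf fragment (names and sizes illustrative)
# Lower Weight is allocated first, so ordinary jobs fill the
# 192GB nodes and the 1TB pair is used last.
NodeName=blade[01-02] RealMemory=1031000 Feature=bigmem Weight=100
NodeName=blade[03-16] RealMemory=191000  Weight=10
```

Users who need the large nodes can then request them explicitly with `--mem` above 192G or `--constraint=bigmem`.]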
Hello Slurm user,
We have 'AccountingStorageEnforce=limits,qos' set in our slurm.conf. I've
added maxjobs=100 for a particular user who is causing havoc on our shared
storage. This setting is still not being enforced, and the user is able to
launch thousands of jobs.
I also ran 'scontrol reconfig' and even r
avid
>
> On Tue, Sep 17, 2019 at 2:06 PM Tina Fora wrote:
>
>> Hello Slurm user,
>>
>> We have 'AccountingStorageEnforce=limits,qos' set in our slurm.conf.
>> I've
>> added maxjobs=100 for a particular user causing havoc on our shared
>> s
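[Editor's note: one point worth checking here is that `MaxJobs` only caps *running* jobs per association; jobs can still be submitted and queued unless `MaxSubmitJobs` is also set. A dry-run sketch of applying both with sacctmgr; the user name `alice` is a placeholder, and the leading `echo` should be dropped to apply:

```shell
# Sketch only: 'alice' is a placeholder user; remove the leading
# 'echo' to apply. MaxJobs limits concurrently running jobs per
# association; MaxSubmitJobs also limits jobs waiting in the queue.
USER_NAME=alice
echo sacctmgr -i modify user where name="$USER_NAME" set MaxJobs=100 MaxSubmitJobs=100
```

The limit must be set on the association the user actually submits under (check with `sacctmgr show assoc where user=<name>`); slurmdbd pushes the change to slurmctld, so no `scontrol reconfig` is needed.]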