r/local/bin/sshare
Look also for files under:
/usr/local/share/man/
Best regards,
--
Gennaro Oliva
option), the default location
for your installation is /usr/local, so you should find the plugins under
/usr/local/lib/slurm
Did you try the slurm-wlm package shipped with Ubuntu?
It comes with the mysql plugin.
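For example (just a sketch, the exact paths depend on how you configured
your build), you could check whether the plugin from the /usr/local
install is there and, if not, try the packaged version:

ls /usr/local/lib/slurm/accounting_storage_mysql.so
sudo apt-get install slurm-wlm slurmdbd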
Best regards
--
Gennaro Oliva
option. Am I missing anything?
I don't know: in my case, after the patch, the pmix v4 lib was detected,
and I was able to use it with a simple test, but it looks like more code
is needed for full support:
https://bugs.schedmd.com/show_bug.cgi?id=7263
Best
--
Gennaro Oliva
t be the issue here?
The patch solved the issue in my build env.
Best regards,
--
Gennaro Oliva
n with both salloc and sbatch
This bug was solved in version 20.02.6-2.
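If you are not sure which packaged version you are running, something
like this should tell you (assuming the Debian/Ubuntu slurmctld package):

dpkg -s slurmctld | grep '^Version'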
Best regards,
--
Gennaro Oliva
bin/bugreport.cgi?bug=954272
Best regards,
--
Gennaro Oliva
Best regards,
--
Gennaro Oliva
-configurator shipped with the slurmctld
package to create your configuration?
> - "scontrol reconfig" tends to kill the slurmctld
Please provide more details about when this is happening.
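For example (just a sketch, unit names assume a systemd install from
the Debian/Ubuntu packages), the output of something like this right
after a reconfigure would help:

sudo scontrol reconfigure
sudo journalctl -u slurmctld -n 100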
Best regards,
[1] https://bugs.schedmd.com/show_bug.cgi?id=4209
--
Gennaro Oliva
switches to serving the newest revision
> eventually. But this may take a while.
Can you do this "switch" inside a slurm job?
> A slurm job might depend on a
> certain revision of such a cvmfs repository.
In this way you can make these jobs depend on the switch job.
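A rough sketch of what I mean (job script names are made up):

SWITCH_ID=$(sbatch --parsable switch_revision.sh)
sbatch --dependency=afterok:$SWITCH_ID analysis_job.sh

Here switch_revision.sh would be the job performing the cvmfs switch,
and any job submitted with afterok:$SWITCH_ID only starts once that job
has completed successfully.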
Best regards,
--
Gennaro Oliva
t doesn't
> build the /usr/lib/slurm/accounting_storage_mysql.so file.
On Debian it builds fine with default-libmysqlclient-dev installed.
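That is, something along these lines before rebuilding the package:

sudo apt-get install default-libmysqlclient-dev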
Best regards,
--
Gennaro Oliva
ystems:
> SlurmdSpoolDir=/var/spool/slurmd
should be
SlurmdSpoolDir=/var/lib/slurm-llnl/slurmd
> StateSaveLocation=/var/spool/slurmctld
should be
StateSaveLocation=/var/lib/slurm-llnl/slurmctld
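After fixing slurm.conf you can double check what the daemons actually
picked up with something like:

scontrol show config | grep -E 'SlurmdSpoolDir|StateSaveLocation'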
Best regards,
--
Gennaro Oliva
ons like Ubuntu, for amd64 systems, slurm plugins are stored
under:
/usr/lib/x86_64-linux-gnu
This value is already hardcoded in the binary so you can omit it in
the configuration file.
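If you want to verify the value compiled into the binaries, something
like this should show it:

scontrol show config | grep -i plugindir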
Best regards,
--
Gennaro Oliva
es
slurmdbd.conf and slurm.conf changing the sensitive data inside.
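For example (just a sed sketch; the parameter names to hide depend on
your configuration):

sed -E 's/^(StoragePass)=.*/\1=REDACTED/' slurmdbd.conf > slurmdbd.conf.redacted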
Regards,
--
Gennaro Oliva
Hi Alan,
On Fri, Sep 28, 2018 at 09:42:23PM +, Alan Do-Omri wrote:
> And Eli, thank you for your help! Yes, the munge service is running. I
> am on Ubuntu 16.04.10.
can you please send the output of the following command?
dpkg -l | grep slurm
Best regards
--
Gennaro Oliva
/cgroup
>
> The partition is configured like this
> PartitionName=long Nodes=marzano[05-13] PriorityTier=30 Default=NO
> MaxTime=5-0 State=UP OverSubscribe=FORCE:1
>
> We are using slurm 16.05.6 on Ubuntu 14.04 LTS
Did you add "cgroup_enable=memory swapaccount=1" to the kernel command
line as suggested here:
https://slurm.schedmd.com/cgroups.html
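On Ubuntu that usually means editing /etc/default/grub, roughly like
this (sketch, adapt to the options you already have there):

# in /etc/default/grub (append to any existing options):
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
# then:
sudo update-grub
sudo reboot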
Best regards,
--
Gennaro Oliva
> ‘slurm-wlm-necla_16.05.4-1_amd64.deb’. As noted before, this contained many
> .so files, but not ‘pam_slurm.so’.
I thought you were using a Debian source package, sorry; in that
case debuild would have complained about the missing libpam0g-dev
dependency. I'm glad you solved it.
Best regards,
--
Gennaro Oliva
Hi Chris,
On Sat, May 05, 2018 at 12:05:42AM +1000, Chris Samuel wrote:
> Yes this is true for the package names, but configuration still resides in
> /etc/slurm-llnl as debian/rules file still says: --sysconfdir=/etc/slurm-llnl
We will move to /etc/slurm-wlm in the future.
Regards
--
Gennaro Oliva
am-slurm. You need to install it.
Best regards
--
Gennaro Oliva
e]...but I'm not 100% sure about this.
Yes, this is the reason: since Slurm is no longer related to LLNL, we
are moving to slurm-wlm.
Best regards,
--
Gennaro Oliva
CPUs=16 RealMemory=15999 State=UNKNOWN*
NodeName=node02 CPUs=16 RealMemory=3944 State=UNKNOWN*
NodeName=node04 CPUs=16 RealMemory=7984 State=UNKNOWN*
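By the way, you can get the CPUs/RealMemory values that slurmd detects
on each node with:

slurmd -C

and paste the printed NodeName line into slurm.conf.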
Best regards
--
Gennaro Oliva
your slurm.conf under:
/etc/slurm-llnl
Anyway, I suggest updating the operating system to stretch and fixing
your configuration under a more recent version of Slurm.
Best regards
--
Gennaro Oliva
emctl status munge
If not, try to start it with systemctl start munge and check the error
message.
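If it fails to start, the log is usually visible with something like
(on a systemd-based install):

journalctl -u munge -n 50

or look at /var/log/munge/munged.log if it exists.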
[1] https://lists.debian.org/debian-user/
Regards
--
Gennaro Oliva
ou can still save the first SLURM_JOB_ID somehow and make the
following jobs compute their sleep time using:
$((SLURM_JOB_ID - FIRST_SLURM_JOB_ID))
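A minimal sketch of what I have in mind (script names are made up):

FIRST_SLURM_JOB_ID=$(sbatch --parsable first_job.sh)
sbatch --export=ALL,FIRST_SLURM_JOB_ID=$FIRST_SLURM_JOB_ID next_job.sh

and inside next_job.sh:

sleep $((SLURM_JOB_ID - FIRST_SLURM_JOB_ID))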
Regards
--
Gennaro Oliva
Hi Renat,
On Fri, Nov 10, 2017 at 09:44:06AM +0100, Yakupov, Renat /DZNE wrote:
> How can I find that out? Dont see anything like that in slurm.conf...
srun -V
Regards,
--
Gennaro Oliva
TASK_ID*5))
where array index ranges from 0 to n-1.
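For example, if the goal is to stagger the tasks by 5 seconds each,
a sketch with n=10 (my_program is just a placeholder):

#!/bin/bash
#SBATCH --array=0-9
sleep $((SLURM_ARRAY_TASK_ID*5))
srun my_program

so task 0 sleeps 0 seconds, task 1 sleeps 5, and so on.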
Regards
--
Gennaro Oliva