Hi Erik,

Your user ID has hit the limit for "max user processes" on the machine.
Note that threads count against this limit just like processes in Linux, and
a single JVM may spawn many threads (for example many GC threads :)  This
limit used to be unlimited for users, but Red Hat distros changed it a few
years ago to cap users at 1024 or so. The system admin will have to raise
the limit for users. On Red Hat the configuration needs to go in
/etc/security/limits.d/90-nproc.conf for RHEL v7.x.
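
For reference, here is a quick way to check where you stand and what a fix
could look like (the user name, file name, and values below are only
examples, so adjust them for your setup):

    # show the current per-user process/thread limit for this account
    ulimit -u

    # count how many threads/processes the account is using right now
    # (output includes one header line)
    ps -L -u erik | wc -l

    # example entry a sysadmin could add under /etc/security/limits.d/
    erik    soft    nproc    16384
    erik    hard    nproc    16384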

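On the DUCC side, a partial workaround is to cap the number of GC threads
each JVM starts by passing the standard HotSpot flags through the job's JVM
arguments (I believe the job parameter is process_jvm_args, but please
double-check the name in your DUCC version):

    process_jvm_args = -XX:ParallelGCThreads=2 -XX:ConcGCThreads=1

That reduces the thread count per process, but raising the nproc limit is
the real fix.
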
Eddie


On Wed, Jul 25, 2018 at 9:23 AM, Erik Fäßler <erik.faess...@uni-jena.de>
wrote:

> Hi all,
>
> is there a way to tell DUCC how many of a node's resources it may allocate
> to jobs? My issue is that when I let DUCC scale out my jobs with an
> appropriate memory definition via process_memory_size, I get a lot of “Init
> Fails”, and the log of each failed process shows
>
> #
> # There is insufficient memory for the Java Runtime Environment to
> continue.
> # Cannot create GC thread. Out of system resources.
>
>
> If I raise the memory requirement per job to something like 40GB (which they
> never actually need), this issue does not come up because only a few
> processes get started, but then most CPU cores go unused, wasting time.
>
> I can’t use the machines exclusively for DUCC, so can I tell DUCC somehow
> how many resources (CPUs, memory) it may allocate?
>
> Thanks,
>
> Erik
