Sun, 29 Jan 2023 13:50:14 +0100 Simon Vogl <[email protected]>:

> failing due to out-of-memory

A slightly related anecdote:

Recently I had to deploy a VM which runs git-daemon.service.
No matter how many GB of memory I assigned to the VM, git-daemon was terminated 
due to OOM.

It turned out that this drop-in for git-daemon.service was required in this 
environment:

[Service]
IOSchedulingClass=idle
CPUSchedulingPolicy=batch
MemoryLow=1000M
MemoryHigh=2000M
MemoryMax=2048M

I think that without such resource restrictions the process never got ENOMEM 
from malloc and assumed that unlimited resources were available. With these 
restrictions, malloc returns an error, and git is apparently smart enough to 
work with whatever amount of memory it is allowed to use.
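
To illustrate what "smart enough" means here, a minimal C sketch of an 
allocation that checks the malloc return value and retries with a smaller 
buffer; the helper name and sizes are made up for illustration, this is not 
git's actual code:

/* A minimal sketch of coping with a memory limit: check the malloc return
 * value and retry with a smaller buffer instead of assuming the allocation
 * always succeeds. */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void *alloc_with_fallback(size_t want, size_t fallback, size_t *got)
{
    void *p = malloc(want);
    if (p != NULL) {
        *got = want;
        return p;
    }
    /* Under a tight limit malloc returns NULL (errno ENOMEM on glibc);
     * a robust tool can shrink its working set and continue. */
    fprintf(stderr, "malloc(%zu) failed (%s), retrying with %zu bytes\n",
            want, strerror(errno), fallback);
    p = malloc(fallback);
    if (p != NULL)
        *got = fallback;
    return p;
}

int main(void)
{
    size_t got = 0;
    void *buf = alloc_with_fallback((size_t)1 << 30, (size_t)1 << 20, &got);
    if (buf == NULL) {
        fprintf(stderr, "out of memory\n");
        return 1;
    }
    printf("allocated %zu bytes\n", got);
    free(buf);
    return 0;
}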


Under the assumption that gcc is equally smart at dealing with ENOMEM, someone 
has to work out what these systemd knobs actually do to the spawned processes. 
Then the same needs to be done in the OBS build script, so that each child 
process of the build script gets proper errors from malloc.
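
I do not know what these knobs translate to on the kernel side, but for 
illustration, one classic way for a build script to make malloc fail cleanly 
in each child process is to cap the address space with setrlimit(RLIMIT_AS) 
before exec. A hedged C sketch only; the 2 GiB value just mirrors 
MemoryMax=2048M above, and "cc --version" stands in for whatever the build 
script actually runs:

#include <stdio.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        /* Child: cap the address space so malloc returns NULL/ENOMEM
         * instead of relying on overcommit and the OOM killer. */
        struct rlimit rl;
        rl.rlim_cur = (rlim_t)2048 * 1024 * 1024;  /* mirrors MemoryMax=2048M */
        rl.rlim_max = rl.rlim_cur;
        if (setrlimit(RLIMIT_AS, &rl) != 0) {
            perror("setrlimit");
            _exit(127);
        }
        /* Placeholder command; a real build script would exec the actual
         * compiler/linker invocations here. */
        execlp("cc", "cc", "--version", (char *)NULL);
        perror("execlp");
        _exit(127);
    }
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    int status;
    if (waitpid(pid, &status, 0) < 0) {
        perror("waitpid");
        return 1;
    }
    return WIFEXITED(status) ? WEXITSTATUS(status) : 1;
}

Whether a per-process rlimit behaves the same as the cgroup-based MemoryMax is 
exactly the part that would need working out.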

Good luck.

Olaf
