Hi,

On Mon, 2023-01-30 at 09:38 -0600, Malcolm wrote:
> On Mon 30 Jan 2023 04:13:39 PM CST, Simon Vogl wrote:
> > so maybe giving the workers a bit more than 1GB memory is still
> > worth trying as a work-around?
>
> <snip>
>
> Hi
> Checking buildwk1:1 through 4 and buildwk3:1 through 4 with `osc
> workerinfo x86_64:buildwk1:1` shows the workers are using jobs=1,
> memory=512M and swap=512.
Sorry for that. Both machines have 4 workers each again, now with 5744M
and 7509M per worker, respectively.

What happened? Short answer: the obs-* packages from the stable release
got updated, I rebooted the machines for the kernel update, and I forgot
to check that the worker VMs ended up with the correct amount of RAM.

Long version: OBS is rather lazy and dumb in adjusting itself to the
available hardware. It simply takes exactly half of the available RAM
and splits it evenly among the worker instances, whose count is taken
from the number of available CPU cores, falling back to 512M per worker.
Apart from that, one can pin the amount of RAM per worker instance with
the config variable OBS_INSTANCE_MEMORY. (I put rough sketches of both
at the end of this mail.)

If you have a considerable amount of RAM, using just half of it wastes
resources. So I modified obsstoragesetup from obs-common to take all
available RAM, minus a small amount to give the OS room to breathe, and
to split the remaining RAM evenly between the worker instances. My patch
does even more for NUMA machines, where the physical memory is divided
between the CPUs: there the smallest NUMA block is taken as the
reference, and the usable RAM is calculated as that size times the
number of NUMA blocks. The third sketch below shows the idea.

Unfortunately I seem to be the only one who feels limited by the
original approach; everyone else is apparently fine with using just half
of the available RAM, or fiddles with manual per-instance RAM
assignments. My PR
(https://github.com/openSUSE/open-build-service/pull/10016) has not seen
any action, and each time the obs packages are updated, my patch is
overwritten. Of course I could build the packages myself to get out of
that loop. Monitoring could be another approach, but I have not even
gotten around to properly monitoring the running schedulers.
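For the curious, here are the sketches mentioned above. The stock sizing
amounts to roughly the following; this is a simplified bash sketch of
what obsstoragesetup does, not the literal code, and the variable names
are mine:

    # simplified sketch of the stock sizing (not the real script)
    total_kb=$(awk '/^MemTotal:/ { print $2 }' /proc/meminfo)
    instances=$(grep -c '^processor' /proc/cpuinfo)  # one worker per core
    # half of the RAM, split evenly across the workers:
    mem_per_worker_kb=$(( total_kb / 2 / instances ))
    # fall back to 512M if nothing sensible comes out:
    [ "$mem_per_worker_kb" -gt 0 ] || mem_per_worker_kb=$(( 512 * 1024 ))
    echo "$instances workers, $(( mem_per_worker_kb / 1024 ))M each"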
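The manual override is just a sysconfig variable. On openSUSE the file
should be /etc/sysconfig/obs-server; path and units are from memory
here, so double-check on your machine:

    # /etc/sysconfig/obs-server (path from memory, please verify)
    OBS_WORKER_INSTANCES="4"
    # fixed RAM per worker instance, in MB if I remember correctly:
    OBS_INSTANCE_MEMORY="5120"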
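And my patched sizing boils down to something like the following sketch;
again, this is not the PR verbatim, and the 1G of OS headroom is an
illustrative figure, not the value from my patch:

    # sketch of the patched sizing: all RAM minus some OS headroom,
    # split evenly between the workers
    total_kb=$(awk '/^MemTotal:/ { print $2 }' /proc/meminfo)
    # on NUMA machines, use the smallest node times the node count,
    # so uneven nodes cannot overcommit any single node:
    if [ -d /sys/devices/system/node/node0 ]; then
        nodes=$(ls -d /sys/devices/system/node/node[0-9]* | wc -l)
        smallest_kb=$(awk '/MemTotal:/ { print $4 }' \
            /sys/devices/system/node/node*/meminfo | sort -n | head -n1)
        total_kb=$(( smallest_kb * nodes ))
    fi
    instances=$(grep -c '^processor' /proc/cpuinfo)
    headroom_kb=$(( 1024 * 1024 ))                   # ~1G for the OS
    mem_per_worker_kb=$(( (total_kb - headroom_kb) / instances ))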
Greetings,

Stefan
--
Stefan Botter                                                  zu Hause
Bremen