On Oct 18, 2025, at 10:43, Mark Millard <[email protected]> wrote:

> void <void_at_f-m.fm> wrote on
> Date: Sat, 18 Oct 2025 12:43:07 UTC :
> 
>> On Fri, Oct 17, 2025 at 09:21:06PM -0700, Mark Millard wrote:
>>> 
>>> At this point stable/15 and a non-debug main 16 are not all that
>>> different. So I attempted builds of what I had for a ports tree
>>> (from Oct 13) and then updating the ports tree and rebuilding
>>> what changed ( PKG_NO_VERSION_FOR_DEPS=yes style ) based on my
>>> normal environment and poudriere-devel use.
>>> 
>>> Neither failed.
>>> 
>>> But you give little configuration information so I do not
>>> know how well my attempt approximated your context:
>> 
>> Point taken, but at that stage I only wanted to know if others
>> could build it, because I couldn't on multiple poudrieres and
>> it had/has not yet (2025.10.18-1224 UTC) been built on the pkg cluster.
>> Now that I know it can be built, I partially know where to look, and
>> I avoid making a PR for the port.
> 
> Do you use anything like:
> 
> # Delay when persistent low free RAM leads to
> # Out Of Memory killing of processes:
> vm.pageout_oom_seq=120
> 
> Or:
> 
> #
> # For plenty of swap/paging space (will not
> # run out), avoid pageout delays leading to
> # Out Of Memory killing of processes:
> #vm.pfault_oom_attempts=-1
> #
> # For possibly insufficient swap/paging space
> # (might run out), increase the pageout delay
> # that leads to Out Of Memory killing of
> # processes (showing defaults at the time):
> #vm.pfault_oom_attempts= 3
> #vm.pfault_oom_wait= 10
> 
> (Mine are in /boot/loader.conf .)
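> For reference, the tunables above can be gathered into one
> /boot/loader.conf fragment (values are exactly the ones quoted
> above; the vm.pfault_* lines stay commented out as in the original):

```shell
# /boot/loader.conf fragment consolidating the OOM-related tunables above.

# Delay when persistent low free RAM leads to
# Out Of Memory killing of processes:
vm.pageout_oom_seq=120

# For plenty of swap/paging space (will not run out),
# avoid pageout delays leading to OOM kills:
#vm.pfault_oom_attempts=-1

# For possibly insufficient swap/paging space (might run out),
# increase the pageout delay before OOM kills
# (defaults at the time shown):
#vm.pfault_oom_attempts=3
#vm.pfault_oom_wait=10
```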
> 
>>> RAM+SWAP == ??? + ??? == ???
>> 128+4 == 132GB
> 
> Note that with USE_TMPFS=all but TMPFS_BLACKLIST extensively
> used to avoid tmpfs use for port-packages with huge file
> system requirements, I reported for the initial build:
> 
> QUOTE
> So: Somewhere between 132624 MiBytes and 143875 MiBytes or
>    so was sufficient RAM+SWAP, all RAM here.
> END QUOTE
> 
> But that was for 32 FreeBSD cpus, not 20. Still, the file
> system usage contribution to RAM+SWAP usage for when tmpfs
> is in full use tends to not be all that dependent on the
> FreeBSD cpu count.
> 
> Converting my figures to GiBytes:
> 
> 132624 MiBytes is a little under 129.6 GiBytes
> 143875 MiBytes is a little under 140.6 GiBytes
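> A quick arithmetic check of those conversions
> (1 GiByte == 1024 MiBytes):

```shell
# Convert the reported RAM+SWAP bounds from MiBytes to GiBytes.
awk 'BEGIN {
    printf "lower: %.3f GiBytes\n", 132624 / 1024
    printf "upper: %.3f GiBytes\n", 143875 / 1024
}'
# prints:
# lower: 129.516 GiBytes
# upper: 140.503 GiBytes
```

> Both values agree with the "a little under 129.6 / 140.6" figures above.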
> 
> The range is that wide based, in part, on the
> lack of significant memory pressure, given the
> 192 GiBytes of RAM. When SWAP is significantly
> involved, one gets much better information about
> RAM+SWAP requirements because of the memory
> pressure consequences. So I'd not infer that
> much from the above.
> 
> I can boot the system using hw.physmem="128G"
> in /boot/loader.conf. I'll probably get a SWAP
> binding warning about 512 GiBytes of SWAP
> being a potential mistuning for that amount of
> RAM. (More like 474 GiBytes of SWAP would likely
> not complain for 128 GiBytes of RAM.)
> 
> I can disable my TMPFS_BLACKLIST list.
> 
> I can constrain to use of PARALLEL_JOBS=20 and
> have MAKE_JOBS_NUMBER_LIMIT=20 for
> ALLOW_MAKE_JOBS use. But attempting to have it
> actually avoid 12 of the 32 FreeBSD cpus would
> probably be messier and I've no experience with
> any known-effective way of doing that for bulk
> runs. So I may well not deal with that issue and
> just let it use up to the 32. This makes
> judging load average implications dependent
> on the 32.
> 
> Also, this build would not have prior builds
> of some of the port-packages. (Nothing would
> end up with "inspected" status.)
> 
> So I may later have better information for
> comparison, including for RAM+SWAP use.
> 
>> The problem happened on two systems. For simplicity I'm talking about the
>> beefier system. It has 20 CPUs (40 with HT, but HT is turned off)
>> and 128 GB RAM. Configured swap is 4 GB and is hardly used.
>> 
>>> poudriere.conf :
>>> USE_TMPFS=???
>> all
>> 
>>> TMPFS_BLACKLIST=???
>> not defined
>> 
>>> PARALLEL_JOBS=??? # (or command line control of such)
>> 1 in poudriere.conf at the moment. It usually has 3 as per pkg.f.o 
>> but -J20 was also tried directly on the command line
>> 
>>> ALLOW_MAKE_JOBS=??? # (defined vs. not)
>> yes
>> 
>>> ALLOW_MAKE_JOBS_PACKAGES=???
>> undefined
>> 
>>> MUTUALLY_EXCLUSIVE_BUILD_PACKAGES=???
>> "llvm* rust* gcc*"
>> 
>>> PRIORITY_BOOST=???
>> undefined
>> 
>>> other relevant possibilities?
>>> 
>>> make.conf (or command line control of such):
>>> MAKE_JOBS_NUMBER_LIMIT=??? # or MAKE_JOBS_NUMBER=???
>> not defined within jailname-make.conf
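>> Collected together, the answers above sketch out roughly this
>> poudriere.conf fragment (a reconstruction for reference only;
>> the exact file layout is an assumption):

```shell
# poudriere.conf (reconstruction of the reported settings)
USE_TMPFS=all
# TMPFS_BLACKLIST: not defined
PARALLEL_JOBS=1       # usually 3 as per pkg.f.o; -J20 also tried on the command line
ALLOW_MAKE_JOBS=yes
# ALLOW_MAKE_JOBS_PACKAGES: undefined
MUTUALLY_EXCLUSIVE_BUILD_PACKAGES="llvm* rust* gcc*"
# PRIORITY_BOOST: undefined
# MAKE_JOBS_NUMBER_LIMIT: not defined in jailname-make.conf (a make.conf setting)
```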
>> 
>>> Details for my context . . .
>> 
>> Thank you for your time in this. I'm interested - do you
>> make available your hacked version of top? Could be useful!
> 
> I'll deal with top separately. I've not been doing
> source based activities for months and likely
> should get my context for such up to date first.

I forgot to ask about the non-tmpfs file system(s):
ZFS? UFS? Any tuning of note?

My prior tests that I reported on were done in a ZFS
context, although just on a single partition: it
is ZFS only so that bectl can be used, not for
redundancy or other typical ZFS purposes. The
only tuning is:

/etc/sysctl.conf:vfs.zfs.vdev.min_auto_ashift=12
/etc/sysctl.conf:vfs.zfs.per_txg_dirty_frees_percent=5

The use of "5" instead of "30" was recommended
by the person who changed the default to 30. It was
for some behavior that I reported for a specific
context, but 5 seemed to not be a problem for me
in any context I had, so I've used it systematically
since then. 5 was the prior default, as I remember.

===
Mark Millard
marklmi at yahoo.com

