[Part of my hack's notation does not generalize well across
various USE_TMPFS= alternatives.]

On May 4, 2024, at 09:23, Mark Millard <mark...@yahoo.com> wrote:

> On May 2, 2024, at 13:42, Mark Millard <mark...@yahoo.com> wrote:
> 
>> On Apr 30, 2024, at 19:08, Mark Millard <mark...@yahoo.com> wrote:
>> 
>>> On Apr 28, 2024, at 18:48, Mark Millard <mark...@yahoo.com> wrote:
>>> 
>>>> I've modified my local poudriere-devel to have Success and Failure lines
>>>> also report the tmpfs size at that point. Using, say, script(1) to log
>>>> the output to a file allows later sorting and listing of the TMPFS usage
>>>> figures. (The context is an amd64 one that has the resources to do a full
>>>> bulk -a with USE_TMPFS=all involved.) An example (from an in-progress
>>>> bulk -a that has a long way to go):
>>>> 
>>>> # grep TMPFS: ~/mmjnk-bulk-a-output.txt | sort -n -r -k11
>>>> 
>>>> [00:36:37] [18] [00:26:53] Finished  print/texlive-docs | 
>>>> texlive-docs-20230313: Success ending TMPFS: 10.67 GiB
>>>> [01:04:41] [07] [00:06:58] Finished  net-mgmt/telegraf | 
>>>> telegraf-1.30.1_1: Success ending TMPFS: 10.52 GiB
>>>> [01:03:32] [25] [00:06:09] Finished  security/trivy | trivy-0.50.1_1: 
>>>> Success ending TMPFS: 10.10 GiB
>>>> . . .
>>>> [01:15:56] [20] [00:00:54] Finished  databases/pg_tileserv | 
>>>> pg_tileserv-1.0.9_12: Failed: build TMPFS: 2.61 GiB
>>>> . . .
>>>> 
>>>> Note that the design is for sort with -k11 to work for Success and
>>>> for Failure. (This is why "ending" is in place for Success.) I
>>>> choose to use poudriere -N (no coloring) for this kind of activity.
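
[For reference, a minimal example of capturing such a log via
script(1); the jail/ports-tree names are placeholders, not my
exact invocation:]

# script ~/mmjnk-bulk-a-output.txt poudriere -N bulk -j <jail> -p <ports> -c -a
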
>>>> 
>>>> This helps with figuring out what is appropriate to list in
>>>> TMPFS_BLACKLIST for a poudriere-devel configuration, to avoid tmpfs
>>>> competing too much for RAM+SWAP. (But appropriate free file system
>>>> space is then needed.)
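
[For reference, a sketch of the sort of poudriere.conf settings
involved; the globs shown are only illustrative guesses based on
large-tmpfs results such as the ones above, and the TMPDIR path is
a placeholder:]

TMPFS_BLACKLIST="chromium* electron* iridium* dotnet*"
TMPFS_BLACKLIST_TMPDIR=${BASEFS}/data/cache/tmp
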
>>>> 
>>>> 
>>>> "<" below is what is new, ">" is what was original, in
>>>> /usr/local/share/poudriere/common.sh :
>>>> 
>>>> 5928,5934d5927
>>>> < tmpfs_at_end="$(env BLOCKSIZE=512 df -t tmpfs \
>>>> < ${MASTERMNTROOT}/${MY_JOBID}/ \
>>>> < ${MASTERMNTROOT}/${MY_JOBID}/.p/ \
>>>> < ${MASTERMNTROOT}/${MY_JOBID}/usr/local/ \
>>>> < 2>/dev/null | tail -3 \
>>>> < | awk '{ tmpfs_use += $3; } END { printf "%s %.2f %s", "TMPFS:", tmpfs_use*512/(1024**3), "GiB" }')"

It turns out that the above assignment fails for the likes
of USE_TMPFS=data instead of USE_TMPFS=all . The builder
stops at that line and never executes the next line (such
as when a job_msg line had been added after it).

Inlining the command substitution in place of the two
${tmpfs_at_end} instances seems to work okay for such
configurations. Something is special about the use of
the assignment.
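
[A minimal, untested sketch of a more tolerant form. It assumes the
stop comes from the assignment's non-zero exit status (for example,
df exiting non-zero when some of the listed paths are not tmpfs
mounts) interacting with common.sh's error handling; the fallback
text is just a placeholder:]

tmpfs_at_end="$(env BLOCKSIZE=512 df -t tmpfs \
    ${MASTERMNTROOT}/${MY_JOBID}/ \
    ${MASTERMNTROOT}/${MY_JOBID}/.p/ \
    ${MASTERMNTROOT}/${MY_JOBID}/usr/local/ \
    2>/dev/null | tail -3 \
    | awk '{ tmpfs_use += $3; } END { printf "%s %.2f %s", "TMPFS:", tmpfs_use*512/(1024**3), "GiB" }')" \
    || tmpfs_at_end="TMPFS: 0.00 GiB" # placeholder when df reports nothing usable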

>>>> 5942c5935
>>>> <    "Success${COLOR_RESET} ending ${tmpfs_at_end}"
>>>> ---
>>>>> "Success"
>>>> 5968c5961
>>>> <    "Failed: ${COLOR_PHASE}${failed_phase}${COLOR_RESET} ${tmpfs_at_end}"
>>>> ---
>>>>> "Failed: ${COLOR_PHASE}${failed_phase}"
>>>> 
>>>> 
>>>> The form of use that I've done also involves (over?) use of
>>>> MUTUALLY_EXCLUSIVE_BUILD_PACKAGES . The data is not as good for
>>>> this other purpose, but the same .txt file can be processed
>>>> with:
>>>> 
>>>> # grep TMPFS: ~/mmjnk-bulk-a-output.txt | sort -r -k3 | more
>>>> [01:42:09] [04] [00:48:16] Finished  lang/erlang-runtime21 | 
>>>> erlang-runtime21-21.3.8.24_3: Success ending TMPFS: 1.92 GiB
>>>> [01:38:39] [28] [00:44:41] Finished  lang/erlang-runtime22 | 
>>>> erlang-runtime22-22.3.4.27: Success ending TMPFS: 1.92 GiB
>>>> [01:05:41] [02] [00:34:54] Finished  lang/erlang-runtime26 | 
>>>> erlang-runtime26-26.2.4: Success ending TMPFS: 2.02 GiB
>>>> . . .
>>>> 
>>>> to find longer-running package builds. This is subject to
>>>> significant variation based on what other builders are
>>>> running in parallel at the time and what sort of load
>>>> averages are involved over the period in question. The
>>>> MUTUALLY_EXCLUSIVE_BUILD_PACKAGES that I've used will
>>>> limit that to some extent. But the result still compares
>>>> some builds that have no parallel activity by other
>>>> builders against builds that have extensive parallel
>>>> activity by other builders (of not-huge packages).
>>>> 
>>>> Note: In modern times the [1D: notation and the like in the
>>>> field that -k3 compares do not sort into sensible places in
>>>> the overall list compared to the likes of, say, [20: . The
>>>> day vs. hour notation is not a straightforward thing to sort
>>>> on.
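
[A rough workaround sketch, only lightly checked: strip the brackets
from field 3 and convert either the HH:MM:SS or the #D:HH:MM:SS form
to seconds before sorting:]

grep TMPFS: ~/mmjnk-bulk-a-output.txt | awk '{
    t = $3; gsub(/[][]/, "", t)
    n = split(t, p, ":"); d = 0
    if (n == 4) { sub(/D$/, "", p[1]); d = p[1]; h = p[2]; m = p[3]; s = p[4] }
    else { h = p[1]; m = p[2]; s = p[3] }
    printf "%d %s\n", ((d*24 + h)*60 + m)*60 + s, $0
}' | sort -n -r -k1 | more
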
>>>> 
>>>> Hopefully in a few days I'll be able to list off example
>>>> top tmpfs usage for USE_TMPFS=all and top build times as
>>>> well (such as they are).
>>>> 
>>>> Note:
>>>> This is from my personal environment. I've not tried to
>>>> simulate how FreeBSD's official package builders are set
>>>> up to operate.
>>> 
>>> I've decided that the "build times" list is not reasonable
>>> for use: too much variability of context.
>>> 
>>> But here is a list of 148 or so of the bigger USE_TMPFS=all
>>> tmpfs-usage builds from a poudriere-devel "-N bulk -c -a"
>>> run. It covers the > 7.66 GiByte tmpfs examples for how I
>>> build.
>>> 
>>> # grep TMPFS: mmjnk-bulk-a-output.txt | sort -n -r -k11 | head -84
>>> [1D:10:04:32] [25] [02:14:20] Finished  www/chromium | 
>>> chromium-124.0.6367.60: Success ending TMPFS: 31.76 GiB
>> . . .
>>> [1D:11:05:43] [04] [00:16:26] Finished  astro/kstars | kstars-3.6.6_1,1: 
>>> Success ending TMPFS: 7.69 GiB
>>> . . .
>> 
>> Note: I had not put ttk into the TMPFS_BLACKLIST as I had intended.
>> 
>>> For reference: Queued: 34535 Built: 33990 Failed: 159   Skipped: 100   
>>> Ignored: 286   Fetched: 0
>>> 
>>> After rebooting, I'm going to re-run a "-N bulk -c -a" with
>>> TMPFS_BLACKLIST being a hopefully good approximation to
>>> matching the above list and with TMPFS_BLACKLIST_TMPDIR also
>>> being defined. Also: use of ALLOW_MAKE_JOBS=yes but no use of
>>> MUTUALLY_EXCLUSIVE_BUILD_PACKAGES . The context:
>>> RAM+SWAP: 192 GiBytes + 512 GiBytes == 704 GiBytes.
>>> Media:    PCIe Optane 1.4T has the file system and the swap.
>>> System:   7950X3D based.
>> 
>> For the 32 hardware thread, 32 builder context, with USE_TMPFS=all
>> --but also using the TMPFS_BLACKLIST-- the result was:
>> 
>> [1D:16:49:31] Stopping 32 builders
>> . . .
>> [main-amd64-bulk_a-default] [2024-05-01_02h27m06s] [committing] Queued: 
>> 34535 Built: 33985 Failed: 162   Skipped: 101   Ignored: 287   Fetched: 0    
>>  Tobuild: 0      Time: 1D:16:50:42
>> 
>> So: a little under 41 hrs for a "-N bulk -c -a".
>> 
>> My adjusted version of top that keeps track of and reports some
>> Maximum Observed figures (MaxObsYYY naming) reported Swap having
>> 50674Mi MaxObsUsed. So SystemRAM+MaxObsSwapUsed was slightly
>> under 241.5 GiBytes.
>> 
>> So the system SystemRAM+SystemSWAP being 704 GiBytes is a little
>> under 3 times the example observed usage, giving margin for
>> larger needs over time.
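
[For reference, the arithmetic, as a quick bc(1) check:]

# echo "scale=2; 192 + 50674/1024" | bc   # 241.48 GiBytes (slightly under 241.5)
# echo "scale=2; 704 / 241.48" | bc       # 2.91 (a little under 3 times)
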
>> 
>> Note the below build finish times and durations, which indicate
>> the overlapping time frames of the parallel builds for:
>> 
>> [1D:08:08:33] [15] [05:25:13] Finished  devel/electron27 | 
>> electron27-27.3.11: Success ending TMPFS: 5.15 GiB
>> [1D:08:24:30] [12] [05:37:16] Finished  devel/electron29 | 
>> electron29-29.3.1: Success ending TMPFS: 5.21 GiB
>> [1D:08:49:26] [04] [05:42:10] Finished  devel/electron25 | 
>> electron25-25.9.8_3: Success ending TMPFS: 4.91 GiB
>> [1D:11:53:32] [16] [07:50:11] Finished  www/ungoogled-chromium | 
>> ungoogled-chromium-123.0.6312.122: Success ending TMPFS: 8.08 GiB
>> [1D:11:54:22] [20] [07:51:01] Finished  www/iridium | 
>> iridium-browser-2024.01.120_1: Success ending TMPFS: 6.65 GiB
>> [1D:11:57:03] [07] [07:53:42] Finished  www/chromium | 
>> chromium-124.0.6367.60: Success ending TMPFS: 8.06 GiB
>> [1D:12:01:01] [05] [06:49:56] Finished  devel/electron28 | 
>> electron28-28.3.1: Success ending TMPFS: 5.13 GiB
>> [1D:13:01:04] [10] [07:23:00] Finished  lang/dotnet | dotnet-8.0.0: Success 
>> ending TMPFS: 4.65 GiB
>> 
>> But I'll note that all 32 builders were active over those periods
>> as well, all using ALLOW_MAKE_JOBS=yes without any use of the likes
>> of MAKE_JOBS_NUMBER_LIMIT. It is a high load average style of
>> building that basically almost always has work pending that can be
>> immediately scheduled on any hardware thread that is no longer doing
>> other work. Still, vastly less than 30hr+ for the 2 *chromium, 25hr+
>> for iridium, and 20hr+ for electron25. ( 27..29 are blacklisted on
>> the build server for main-amd64 and there is no 26).
>> 
>> Reminder: prior experimentation had already fetched everything,
>> thus the "Fetched: 0". Fetching would add to the elapsed time.
>> 
>> I make no claim that build servers should be pushed that hard on
>> any sort of sustained basis --but the above does indicate what a
>> 7950X3D class 16-core/32-hwthread system can do for "bulk -c -a"
>> activity: somewhat under 2 days for such a "bulk -c -a" to build
>> 33985 packages (plus 162 failures). (The PCIe 1.4T Optane media
>> helps as well.)
>> 
>> Based on the TMPFS_BLACKLIST that I used, the resulting largest
>> tmpfs usage examples were (ttk had been omitted from TMPFS_BLACKLIST):
>> 
>> [1D:13:21:23] [12] [00:14:48] Finished  science/dynare | dynare-5.4_7: 
>> Success ending TMPFS: 10.74 GiB
>> [1D:12:55:42] [07] [00:40:11] Finished  math/octave | octave-9.1.0: Success 
>> ending TMPFS: 10.27 GiB
>> [1D:16:34:13] [25] [00:05:04] Finished  x11/kde5 | kde5-5.27.11.23.08.5_2: 
>> Success ending TMPFS: 9.95 GiB
>> [1D:15:29:52] [12] [00:09:15] Finished  science/ttk | ttk-1.2.0_1: Success 
>> ending TMPFS: 9.59 GiB
>> [1D:16:30:49] [08] [00:02:04] Finished  biology/biostar-tools | 
>> biostar-tools-1.10.1_3: Success ending TMPFS: 9.56 GiB
>> [1D:16:01:32] [32] [00:25:56] Finished  science/code_saturne | 
>> code_saturne-8.1.1: Success ending TMPFS: 9.50 GiB
>> [1D:11:16:08] [15] [00:48:29] Finished  emulators/virtualbox-ose | 
>> virtualbox-ose-6.1.50_1: Success ending TMPFS: 9.48 GiB
>> [1D:13:41:14] [32] [00:19:20] Finished  deskutils/nextcloudclient | 
>> nextcloudclient-3.12.3: Success ending TMPFS: 9.21 GiB
>> [1D:12:14:10] [24] [00:16:20] Finished  print/tex-xetex | 
>> tex-xetex-0.99993_5: Success ending TMPFS: 9.15 GiB
>> [1D:15:20:33] [18] [01:14:02] Finished  science/paraview | 
>> paraview-5.12.0_2: Success ending TMPFS: 9.14 GiB
>> [1D:11:55:04] [32] [00:06:42] Finished  print/tex-dvipdfmx | 
>> tex-dvipdfmx-20230313_1: Success ending TMPFS: 9.14 GiB
>> [1D:14:04:21] [20] [00:01:13] Finished  print/texlive-full | 
>> texlive-full-20230313: Success ending TMPFS: 9.10 GiB
>> [1D:13:55:09] [20] [00:05:39] Finished  science/libghemical | 
>> libghemical-3.0.0_20: Success ending TMPFS: 8.98 GiB
>> [1D:11:21:28] [02] [00:53:57] Finished  print/tex-luatex | 
>> tex-luatex-1.16.0_1: Success ending TMPFS: 8.92 GiB
>> [1D:10:21:01] [31] [00:45:03] Finished  emulators/virtualbox-ose-legacy | 
>> virtualbox-ose-legacy-5.2.44_17: Success ending TMPFS: 8.90 GiB
>> [1D:14:26:10] [17] [01:13:36] Finished  graphics/qgis-ltr | qgis-ltr-3.34.6: 
>> Success ending TMPFS: 8.87 GiB
>> [1D:14:20:59] [06] [01:08:24] Finished  graphics/qgis | qgis-3.36.2: Success 
>> ending TMPFS: 8.87 GiB
>> [1D:16:13:36] [02] [00:41:13] Finished  math/octave-forge | 
>> octave-forge-20240403: Success ending TMPFS: 8.56 GiB
>> . . .
>> 
>> 
>> 
>> An additional note for *chromium/iridium/electron* (and dotnet):
>> 
>> I also previously had done a "-N bulk -a" with those and
>> more listed in MUTUALLY_EXCLUSIVE_BUILD_PACKAGES and no
>> TMPFS_BLACKLIST use (but ALLOW_MAKE_JOBS=yes in use):
>> 
>> [1D:01:50:13] [03] [00:33:53] Finished  lang/dotnet | dotnet-8.0.0: Success 
>> ending TMPFS: 21.23 GiB
>> [1D:10:04:32] [25] [02:14:20] Finished  www/chromium | 
>> chromium-124.0.6367.60: Success ending TMPFS: 31.76 GiB
>> [1D:12:02:49] [27] [01:57:58] Finished  devel/electron25 | 
>> electron25-25.9.8_3: Success ending TMPFS: 20.55 GiB
>> [1D:13:51:47] [11] [01:48:15] Finished  devel/electron27 | 
>> electron27-27.3.11: Success ending TMPFS: 22.51 GiB
>> [1D:14:53:51] [01] [00:57:16] Finished  devel/electron28 | 
>> electron28-28.3.1: Success ending TMPFS: 22.66 GiB
>> [1D:15:51:46] [02] [00:57:52] Finished  devel/electron29 | 
>> electron29-29.3.1: Success ending TMPFS: 22.49 GiB
>> [1D:17:19:08] [01] [01:14:40] Finished  www/iridium | 
>> iridium-browser-2024.01.120_1: Success ending TMPFS: 25.66 GiB
>> [1D:22:36:10] [02] [01:09:31] Finished  www/ungoogled-chromium | 
>> ungoogled-chromium-123.0.6312.122: Success ending TMPFS: 21.43 GiB
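
[For reference, a sketch of the sort of poudriere.conf line involved;
the globs are only an illustrative subset, not the full list I used:]

MUTUALLY_EXCLUSIVE_BUILD_PACKAGES="chromium* ungoogled-chromium* iridium* electron* dotnet*"
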
> 
> For comparison/contrast: I tried it in my UFS based boot media on
> the same system . . .
> 
> [I later give notes about graphics/pinta getting stuck, much like
> print/miktex had in the ZFS experiment.]
> 
> For the 32 hardware thread, 32 builder context, with USE_TMPFS=all
> --but also using the TMPFS_BLACKLIST-- the result was:
> 
> [1D:16:42:48] Stopping 32 builders
> . . .
> [main-amd64-bulk_a-default] [2024-05-02_23h01m58s] [committing] Queued: 34535 
> Built: 33990 Failed: 157   Skipped: 101   Ignored: 287   Fetched: 0     
> Tobuild: 0      Time: 1D:16:43:59
> 
> ZFS had been: Time: 1D:16:50:42
> 
> My adjusted version of top that keeps track of and reports some
> Maximum Observed figures (MaxObsYYY naming) reported Swap having
> 45241Mi MaxObsUsed. So SystemRAM+MaxObsSwapUsed was slightly
> under 236.2 GiBytes. (The ZFS example had been: 241.5 GiBytes.)
> 
> So the system SystemRAM+SystemSWAP being 704 GiBytes was a little
> under 3 times the example observed usage in both cases, giving
> margin for larger needs over time.
> 
> 
> 
> As for examples of builders getting stuck . . .
> 
> graphics/pinta in the UFS experiment had gotten stuck in threads
> of /usr/local/bin/mono (mono-sgen):
> 
> [05] 15:31:47                     graphics/pinta | pinta-1.7.1_4              
>                         stage 15:28:31 2.30 GiB   0%   0%
> 
> # procstat -k -k 93415
>  PID    TID COMM                TDNAME              KSTACK                    
>    
> 93415 671706 mono-sgen           -                   mi_switch+0xba 
> sleepq_catch_signals+0x2c6 sleepq_wait_sig+0x9 _sleep+0x1ae umtxq_sleep+0x2cd 
> do_lock_umutex+0x6a6 __umtx_op_wait_umutex+0x49 sys__umtx_op+0x7e 
> amd64_syscall+0x115 fast_syscall_common+0xf8 
> 93415 678651 mono-sgen           SGen worker         mi_switch+0xba 
> sleepq_catch_signals+0x2c6 sleepq_wait_sig+0x9 _sleep+0x1ae umtxq_sleep+0x2cd 
> do_wait+0x244 __umtx_op_wait_uint_private+0x54 sys__umtx_op+0x7e 
> amd64_syscall+0x115 fast_syscall_common+0xf8 
> 93415 678652 mono-sgen           Finalizer           mi_switch+0xba 
> sleepq_catch_signals+0x2c6 sleepq_wait_sig+0x9 _sleep+0x1ae umtxq_sleep+0x2cd 
> __umtx_op_sem2_wait+0x49a sys__umtx_op+0x7e amd64_syscall+0x115 
> fast_syscall_common+0xf8 
> 93415 678655 mono-sgen           -                   mi_switch+0xba 
> sleepq_catch_signals+0x2c6 sleepq_wait_sig+0x9 _sleep+0x1ae umtxq_sleep+0x2cd 
> do_wait+0x244 __umtx_op_wait_uint_private+0x54 sys__umtx_op+0x7e 
> amd64_syscall+0x115 fast_syscall_common+0xf8 
> 93415 678660 mono-sgen           Thread Pool Wor     mi_switch+0xba 
> sleepq_catch_signals+0x2c6 sleepq_wait_sig+0x9 _sleep+0x1ae umtxq_sleep+0x2cd 
> do_lock_umutex+0x6a6 __umtx_op_wait_umutex+0x49 sys__umtx_op+0x7e 
> amd64_syscall+0x115 fast_syscall_common+0xf8
> 
> So I did a kill -9 93415 to let the bulk run complete.
> 
> I then removed my ADDITION of BROKEN to print/miktex that had gotten
> stuck in the ZFS experiment and tried in the now tiny load average
> context: bulk print/miktex graphics/pinta
> 
> They both worked just fine, not getting stuck (UFS context):
> 
> [00:00:50] [02] [00:00:25] Finished  graphics/pinta | pinta-1.7.1_4: Success 
> ending TMPFS: 2.30 GiB
> [00:14:11] [01] [00:13:47] Finished  print/miktex | miktex-23.9_3: Success 
> ending TMPFS: 3.21 GiB
> 
> I'll note that the procstat for the stuck print/miktex in the
> ZFS context had looked like:
> 
> # procstat -k -k 70121
> PID    TID COMM                TDNAME              KSTACK                     
>   
> 70121 409420 miktex-ctangle      -                   mi_switch+0xba 
> sleepq_catch_signals+0x2c6 sleepq_wait_sig+0x9 _sleep+0x1ae umtxq_sleep+0x2cd 
> do_wait+0x244 __umtx_op_wait+0x53 sys__umtx_op+0x7e amd64_syscall+0x115 
> fast_syscall_common+0xf8 
> 70121 646547 miktex-ctangle      -                   mi_switch+0xba 
> sleepq_catch_signals+0x2c6 sleepq_wait_sig+0x9 _sleep+0x1ae kqueue_scan+0x9f1 
> kqueue_kevent+0x13b kern_kevent_fp+0x4b kern_kevent_generic+0xd6 
> sys_kevent+0x61 amd64_syscall+0x115 fast_syscall_common+0xf8 
> 70121 646548 miktex-ctangle      -                   mi_switch+0xba 
> sleepq_catch_signals+0x2c6 sleepq_wait_sig+0x9 _sleep+0x1ae umtxq_sleep+0x2cd 
> do_wait+0x244 __umtx_op_wait_uint_private+0x54 sys__umtx_op+0x7e 
> amd64_syscall+0x115 fast_syscall_common+0xf8
> 
> It looks like there is some form of failing race condition that
> can occur --and does, rarely, in high load average contexts.


===
Mark Millard
marklmi at yahoo.com

