On Apr 15, 2023, at 21:31, Mark Millard <mark...@yahoo.com> wrote:

> On Apr 15, 2023, at 17:27, Mark Millard <mark...@yahoo.com> wrote:
> 
>> On Apr 15, 2023, at 15:49, Mark Millard <mark...@yahoo.com> wrote:
>> 
>>> . . .
>>>> 
>>>> 
>>>> (Mostly written as I progressed but some material later
>>>> inserted into/around previously written material.)
>>>> 
>>>> Summary:
>>>> 
>>>> As it stands, it looks like reverting the dnode_is_dirty
>>>> code is what fixes the corruptions that my type of
>>>> test context produced via poudriere bulk activity.
>>>> 
>>>> 
>>>> The details that led to that summary . . .
>>>> 
>>>> Using my build environment for updating my temporary,
>>>> experimental context, an environment running a world and
>>>> kernel that predate the import:
>>>> 
>>>> # uname -apKU
>>>> FreeBSD CA72_4c8G_ZFS 14.0-CURRENT FreeBSD 14.0-CURRENT #90 main-n261544-cee09bda03c8-dirty: Wed Mar 15 20:25:49 PDT 2023     root@CA72_16Gp_ZFS:/usr/obj/BUILDs/main-CA72-nodbg-clang/usr/main-src/arm64.aarch64/sys/GENERIC-NODBG-CA72 arm64 aarch64 1400082 1400082
>>>> 
>>>> (Note the "nodbg": I normally run non-debug main builds,
>>>> but with symbols not stripped.)
>>>> 
>>>> The kernel and world for this are what is in old-main-CA72:
>>>> 
>>>> # bectl list
>>>> BE            Active Mountpoint Space Created
>>>> main-CA72     R      -          3.98G 2023-04-12 20:29
>>>> old-main-CA72 N      /          1.08M 2023-02-06 19:44
>>>> 
>>>> (Almost everything else is outside the BEs and so is shared
>>>> across the BEs.)
>>>> 
>>>> I updated the source tree to also have the following
>>>> (whitespace details likely not preserved in this note):
>>>> 
>>>> # git -C /usr/main-src/ diff /usr/main-src/sys/contrib/openzfs/module/zfs/dnode.c
>>>> diff --git a/sys/contrib/openzfs/module/zfs/dnode.c b/sys/contrib/openzfs/module/zfs/dnode.c
>>>> index 367bfaa80726..49a7f59c0da4 100644
>>>> --- a/sys/contrib/openzfs/module/zfs/dnode.c
>>>> +++ b/sys/contrib/openzfs/module/zfs/dnode.c
>>>> @@ -1772,17 +1772,7 @@ dnode_is_dirty(dnode_t *dn)
>>>> {
>>>>     mutex_enter(&dn->dn_mtx);
>>>>     for (int i = 0; i < TXG_SIZE; i++) {
>>>> -               list_t *list = &dn->dn_dirty_records[i];
>>>> -               for (dbuf_dirty_record_t *dr = list_head(list);
>>>> -                   dr != NULL; dr = list_next(list, dr)) {
>>>> -                       if (dr->dr_dbuf == NULL ||
>>>> -                           (dr->dr_dbuf->db_blkid != DMU_BONUS_BLKID &&
>>>> -                           dr->dr_dbuf->db_blkid != DMU_SPILL_BLKID)) {
>>>> -                               mutex_exit(&dn->dn_mtx);
>>>> -                               return (B_TRUE);
>>>> -                       }
>>>> -               }
>>>> -               if (dn->dn_free_ranges[i] != NULL) {
>>>> +               if (multilist_link_active(&dn->dn_dirty_link[i])) {
>>>>                     mutex_exit(&dn->dn_mtx);
>>>>                     return (B_TRUE);
>>>>             }
>>>> 
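>>>> For context, here is a minimal sketch of what the whole
>>>> reverted routine looks like once the hunk above is applied.
>>>> The body inside the loop is straight from the hunk; the
>>>> mutex_exit/B_FALSE tail is the diff context that got cut
>>>> off, reconstructed from my memory of upstream OpenZFS, so
>>>> treat those details as an assumption on my part:
>>>> 
>>>> boolean_t
>>>> dnode_is_dirty(dnode_t *dn)
>>>> {
>>>>         mutex_enter(&dn->dn_mtx);
>>>> 
>>>>         for (int i = 0; i < TXG_SIZE; i++) {
>>>>                 /* A dirty dnode is linked onto the per-txg dirty list. */
>>>>                 if (multilist_link_active(&dn->dn_dirty_link[i])) {
>>>>                         mutex_exit(&dn->dn_mtx);
>>>>                         return (B_TRUE);
>>>>                 }
>>>>         }
>>>> 
>>>>         mutex_exit(&dn->dn_mtx);
>>>> 
>>>>         return (B_FALSE);
>>>> }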
>>>> 
>>>> 
>>>> 
>>>> I did my usual buildworld/buildkernel sequence and then
>>>> one of my normal install sequences into main-CA72 to
>>>> update it to have the change, as well as the prior
>>>> material involved in the first experiment that I'd
>>>> reported on.
>>>> 
>>>> I cleared the built-package content for the jail that I
>>>> use for temporary experiments, such as the prior testing
>>>> that got the 11 builder failures:
>>>> 
>>>> # poudriere pkgclean -jmain-CA72-bulk_a -A
>>>> 
>>>> I then rebooted using the updated main-CA72 BE.
>>>> 
>>>> Then I started the:
>>>> 
>>>> # poudriere bulk -jmain-CA72-bulk_a -w -f ~/origins/CA72-origins.txt
>>>> . . .
>>>> [00:00:37] Building 476 packages using up to 16 builders
>>>> [00:00:37] Hit CTRL+t at any time to see build progress and stats
>>>> [00:00:38] [01] [00:00:00] Builder starting
>>>> [00:00:40] [01] [00:00:02] Builder started
>>>> [00:00:40] [01] [00:00:00] Building ports-mgmt/pkg | pkg-1.19.1_1
>>>> 
>>>> In the prior experiment it got:
>>>> 
>>>> 476 = 252 success + 11 failed + 213 skipped
>>>> 
>>>> and it reported the time for that as: 00:37:52.
>>>> 
>>>> A normal from-scratch build takes many hours (multiple
>>>> compiler toolchains and such), so my first report after
>>>> this point will be for one of:
>>>> 
>>>> A) It got to, say, 00:40:00 or beyond, with or without
>>>> failures.
>>>> vs.
>>>> B) It got failures and stopped before that.
>>>> 
>>>> . . . TIME GOES BY . . .
>>>> 
>>>> At about 00:40:00 the status was:
>>>> 
>>>> [00:40:00] [06] [00:00:00] Building x11/libXv | libXv-1.0.12,1
>>>> load: 30.73  cmd: sh 1508 [nanslp] 2400.88r 6.69u 11.90s 0% 3960k
>>>> [main-CA72-bulk_a-default] [2023-04-15_14h47m19s] [parallel_build:] Queued: 476 Built: 235 Failed: 0   Skipped: 0   Ignored: 0   Fetched: 0   Tobuild: 241  Time: 00:40:01
>>>> ID   TOTAL                    ORIGIN | PKGNAME                   PHASE         PHASE    TMPFS       CPU% MEM%
>>>> [15] 00:07:44     devel/py-lxml@py39 | py39-lxml-4.9.2           stage         00:00:08 40.00 KiB     0%   0%
>>>> [01] 00:00:34         x11/libXxf86vm | libXxf86vm-1.1.4_3        build-depends 00:00:03 56.00 KiB   2.3%   0%
>>>> [16] 00:01:59     x11-toolkits/libXt | libXt-1.2.1,1             configure     00:00:52 40.00 KiB   0.3%   0%
>>>> [02] 00:01:40             devel/dbus | dbus-1.14.6,1             configure     00:00:05 36.00 KiB   0.5%   0%
>>>> [03] 00:02:20         x11/libXrender | libXrender-0.9.10_2       configure     00:01:27 40.00 KiB     0%   0%
>>>> [04] 00:00:44      graphics/libglvnd | libglvnd-1.6.0            build-depends 00:00:13 52.00 KiB  20.3% 0.1%
>>>> [05] 00:01:39              x11/xprop | xprop-1.2.6               configure     00:00:45 56.00 KiB   0.7% 0.2%
>>>> [06] 00:00:14              x11/libXv | libXv-1.0.12,1            pkg-depends   00:00:03 52.00 KiB   3.6%   0%
>>>> [07] 00:01:57          x11/libXfixes | libXfixes-6.0.0           configure     00:00:42 40.00 KiB   0.1%   0%
>>>> [08] 00:03:01           devel/glib20 | glib-2.76.1,2             configure     00:01:26 40.00 KiB   4.3% 0.1%
>>>> [09] 00:01:21 shells/bash-completion | bash-completion-2.11_2,2  configure     00:00:13 32.00 KiB   5.7%   0%
>>>> [10] 00:06:26   devel/qt5-buildtools | qt5-buildtools-5.15.8p157 package       00:01:57 44.00 KiB  76.1% 0.1%
>>>> [11] 00:01:20          print/psutils | psutils-1.17_5            stage         00:00:03 40.00 KiB   0.2%   0%
>>>> [12] 00:02:09         x11/libxkbfile | libxkbfile-1.1.0          configure     00:01:22 44.00 KiB   0.1%   0%
>>>> [13] 00:08:54       devel/cmake-core | cmake-core-3.25.1         build         00:01:43 36.00 KiB 694.9% 5.2%
>>>> [14] 00:01:20     x11/xcb-util-image | xcb-util-image-0.4.0_1    configure     00:00:14 48.00 KiB     0%   0%
>>>> [00:40:14] Logs: /usr/local/poudriere/data/logs/bulk/main-CA72-bulk_a-default/2023-04-15_14h47m19s
>>>> [00:40:22] [11] [00:01:28] Finished print/psutils | psutils-1.17_5: Success
>>>> 
>>>> So: no failures so far, 235 ports built, and a bunch in progress.
>>>> 
>>>> Note the maximum observed ("MaxObs") load averages so far,
>>>> shown below (for there being only 16 cores, i.e.,
>>>> 16 Cortex-A72 threads, one per core):
>>>> 
>>>> load averages:  43.75,  34.32,  27.40 MaxObs:  45.03,  34.32,  27.40
>>>> 
>>>> I'll report again if it gets a corruption failure.
>>>> Otherwise it will be many hours before it would
>>>> complete without such failures (multiple compiler
>>>> toolchain builds and such).
>>>> 
>>> 
>>> To be explicit, since I've now seen your commit: my
>>> test does not include the change below (whitespace
>>> details not preserved in this note):
>>> 
>>> @@ -2650,9 +2641,7 @@ dnode_next_offset(dnode_t *dn, int flags, uint64_t *offset,
>>>         rw_enter(&dn->dn_struct_rwlock, RW_READER);
>>> 
>>>         if (dn->dn_phys->dn_nlevels == 0) {
>>> -               if (!(flags & DNODE_FIND_HOLE)) {
>>> -                       error = SET_ERROR(ESRCH);
>>> -               }
>>> +               error = SET_ERROR(ESRCH);
>>>                 goto out;
>>>         }
>>> 
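>>> (For clarity about what that hunk changes, my paraphrase of
>>> the resulting prologue is below; the reading of the effect is
>>> mine, not a claim about which behavior is correct. With the
>>> change, a DNODE_FIND_HOLE search also fails with ESRCH when
>>> dn_nlevels == 0, instead of falling through with no error.)
>>> 
>>> if (dn->dn_phys->dn_nlevels == 0) {
>>>         /* Now applies to both data and hole searches. */
>>>         error = SET_ERROR(ESRCH);
>>>         goto out;
>>> }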
>>> 
>>> I make no claims about which way dnode_next_offset should be.
>>> I was only providing some independent evidence that might
>>> prove of some use to folks who understand the alternative
>>> code sequences.
>>> 
>>> 
>>> For reference, at about the 1-hour point for what I'm testing:
>>> 
>>> [00:57:49] [01] [00:00:51] Finished sysutils/u-boot-tools | u-boot-tools-2022.10: Success
>>> [main-CA72-bulk_a-default] [2023-04-15_14h47m19s] [parallel_build:] Queued: 476 Built: 306 Failed: 0   Skipped: 0   Ignored: 0   Fetched: 0   Tobuild: 170  Time: 00:59:49
>>> ID   TOTAL                ORIGIN | PKGNAME           PHASE   PHASE    TMPFS      CPU% MEM%
>>> [06] 00:16:54 devel/binutils@native | binutils-2.40_2,1 package 00:04:22 56.00 KiB 100% 0.2%
>>> [09] 00:15:50           lang/ruby31 | ruby-3.1.3_2,1    package 00:00:28 40.00 KiB 100% 0.2%
>>> [13] 00:28:43      devel/cmake-core | cmake-core-3.25.1 package 00:14:20 36.00 KiB 100% 0.2%
>>> [01:00:03] Logs: /usr/local/poudriere/data/logs/bulk/main-CA72-bulk_a-default/2023-04-15_14h47m19s
>>> [01:00:06] [06] [00:16:57] Finished devel/binutils@native | binutils-2.40_2,1: Success
>> 
>> Just a status update as of lang/gcc12 finishing up:
>> 
>> [02:18:39] [03] [01:18:30] Finished lang/gcc12 | gcc12-12.2.0_5: Success
>> [02:19:11] [02] [00:12:27] Finished print/harfbuzz-icu | harfbuzz-icu-7.1.0: Success
>> load: 63.40  cmd: sh 59209 [runnable] 0.03r 0.00u 0.00s 0% 4012k
>> [main-CA72-bulk_a-default] [2023-04-15_14h47m19s] [parallel_build:] Queued: 476 Built: 404 Failed: 0   Skipped: 0   Ignored: 0   Fetched: 0   Tobuild: 72   Time: 02:19:04
>> ID   TOTAL                          ORIGIN | PKGNAME                         PHASE           PHASE    TMPFS      CPU% MEM%
>> [15] 01:15:01                 devel/llvm16 | llvm16-16.0.0_1                 build           00:55:16  4.00 KiB 428.6% 4.4%
>> [03] 01:19:09                   lang/gcc12 | gcc12-12.2.0_5                  build_port_done 00:00:39  4.00 KiB  27.8%   0%
>> [04] 00:51:27                    lang/rust | rust-1.68.0                     build           00:41:24  8.00 KiB 480.6% 4.3%
>> [06] 00:12:09 print/ghostscript9-agpl-base | ghostscript9-agpl-base-9.56.1_8 build           00:07:19 36.00 KiB   0.7%   0%
>> [09] 01:15:01                 devel/llvm15 | llvm15-15.0.7_1                 build           00:57:14  4.00 KiB 436.2%   5%
>> [02:19:18] Logs: /usr/local/poudriere/data/logs/bulk/main-CA72-bulk_a-default/2023-04-15_14h47m19s
>> 
>> So: 404 packages built and 0 build failures. Five builders were
>> active at the time.
>> 
>> FYI at about the same time:
>> load averages:  64.31,  66.32,  66.68 MaxObs:  99.41,  83.15,  74.18 
>> 
>> I'll note that I use non-default options to avoid LTO-based builds
>> of compiler toolchains and such, so this testing is of my normal
>> style of builds, not of the FreeBSD default options.
>> 
>> I'll also note that, unlike my normal builds on this machine, this
>> is with USE_TMPFS=data (instead of USE_TMPFS=all). This leads to
>> a lot more ZFS I/O, which seemed appropriate for the type of
>> testing context involved. I have one other machine that I normally
>> use USE_TMPFS=all with, but the rest normally use USE_TMPFS=data.
>> This is because only those two machines have the RAM to make
>> USE_TMPFS=all reasonable. The machine in use has 64 GiBytes
>> of RAM and "Swap: 241664Mi Total" for the 16 hardware-thread
>> context, not that it has ever used much of the swap, except when
>> some process turned into a massive-size file generator. (Limited
>> only by the 64-bit file offset, other than my noticing and
>> killing such?)
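>> 
>> For anyone wanting to mirror the setup, the knob involved is
>> USE_TMPFS in poudriere.conf. A minimal sketch follows; the path
>> is the usual default install location, and the comments are my
>> summary of the behavior rather than authoritative documentation:
>> 
>> # /usr/local/etc/poudriere.conf (excerpt)
>> USE_TMPFS=data    # tmpfs only for poudriere's data directories;
>>                   # wrkdirs land on ZFS, hence much more ZFS I/O
>> #USE_TMPFS=all    # everything in tmpfs; needs far more RAM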
>> 
> 
> The bjam command used for the stage phase of devel/boost-libs is a
> long-running, single-process, single-threaded context. So far it
> has accumulated over 50 CPU minutes for stage.
> 
> [main-CA72-bulk_a-default] [2023-04-15_14h47m19s] [parallel_build:] Queued: 476 Built: 463 Failed: 0   Skipped: 0   Ignored: 0   Fetched: 0   Tobuild: 13   Time: 06:40:59
> ID   TOTAL           ORIGIN | PKGNAME           PHASE PHASE    TMPFS     CPU% MEM%
> [02] 01:06:33 devel/boost-libs | boost-libs-1.81.0 stage 00:55:57 16.00 KiB 100% 0.5%
> [06:41:12] Logs: /usr/local/poudriere/data/logs/bulk/main-CA72-bulk_a-default/2023-04-15_14h47m19s
> 
> But, so far: 463 ports built, 0 failures. It may be
> a while before the last 12 get a chance, as they
> wait for devel/boost-libs to complete.

No failures overall:

[main-CA72-bulk_a-default] [2023-04-15_14h47m19s] [committing:] Queued: 476 Built: 476 Failed: 0   Skipped: 0   Ignored: 0   Fetched: 0   Tobuild: 0    Time: 09:08:52

(Several of the last builds were basically each a sequence of
single-threaded, single-process steps that took notable time.)



===
Mark Millard
marklmi at yahoo.com

