On Apr 16, 2023, at 01:34, Mark Millard <mark...@yahoo.com> wrote:

> On Apr 15, 2023, at 19:13, Mark Millard <mark...@yahoo.com> wrote:
> 
>> This message is just a general question.
>> 
>> So far, no commit to FreeBSD's main seems to be
>> analogous to the content of:
>> 
>> https://github.com/openzfs/zfs/pull/14739/files
>> 
>> After my existing poudriere bulk test finishes,
>> should I avoid having the content of that change
>> in place for future testing, or should I keep
>> using the content of that change?
>> 
>> (The question is prompted by the 2 recent commits
>> that I will update my test environment to use,
>> in part by fetching and updating to a new head,
>> avoiding the "no dnode_next_offset change" status
>> that my existing test has.)
>> 
> 
> Not knowing, I updated to:
> 
> # uname -apKU
> FreeBSD CA72_4c8G_ZFS 14.0-CURRENT FreeBSD 14.0-CURRENT #92 main-n262185-b1a00c2b1368-dirty: Sun Apr 16 00:10:51 PDT 2023     root@CA72_4c8G_ZFS:/usr/obj/BUILDs/main-CA72-nodbg-clang/usr/main-src/arm64.aarch64/sys/GENERIC-NODBG-CA72 arm64 aarch64 1400086 1400086
> 
> with the following still in place:
> 
> # git -C /usr/main-src/ diff sys/contrib/openzfs/
> diff --git a/sys/contrib/openzfs/module/zfs/dmu.c b/sys/contrib/openzfs/module/zfs/dmu.c
> index ce985d833f58..cda1472a77aa 100644
> --- a/sys/contrib/openzfs/module/zfs/dmu.c
> +++ b/sys/contrib/openzfs/module/zfs/dmu.c
> @@ -2312,8 +2312,10 @@ dmu_brt_clone(objset_t *os, uint64_t object, uint64_t 
> offset, uint64_t length,
>                        dl->dr_overridden_by.blk_phys_birth = 0;
>                } else {
>                        dl->dr_overridden_by.blk_birth = dr->dr_txg;
> -                       dl->dr_overridden_by.blk_phys_birth =
> -                           BP_PHYSICAL_BIRTH(bp);
> +                       if (!BP_IS_EMBEDDED(bp)) {
> +                               dl->dr_overridden_by.blk_phys_birth =
> +                                   BP_PHYSICAL_BIRTH(bp);
> +                       }
>                }
>                  mutex_exit(&db->db_mtx);
> 
> 
> 
> and booted the update. I've done a:
> 
> # poudriere pkgclean -jmain-CA72-bulk_a -A
> 
> and started another package build run based
> on that combination:
> 
> # poudriere bulk -jmain-CA72-bulk_a -w -f ~/origins/CA72-origins.txt
> . . .
> [main-CA72-bulk_a-default] [2023-04-16_00h38m01s] [balancing_pool:] Queued: 476 Built: 0   Failed: 0   Skipped: 0   Ignored: 0   Fetched: 0   Tobuild: 476  Time: 00:00:24
> [00:00:37] Recording filesystem state for prepkg... done
> [00:00:37] Building 476 packages using up to 16 builders
> [00:00:37] Hit CTRL+t at any time to see build progress and stats
> [00:00:37] [01] [00:00:00] Builder starting
> [00:00:40] [01] [00:00:03] Builder started
> [00:00:40] [01] [00:00:00] Building ports-mgmt/pkg | pkg-1.19.1_1
> . . .
> 
> If there are no failures, it will be about 9 hrs before I know that.
> Given that I'll be trying to sleep soon, it may be about that long
> either way.
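
For context on the dmu.c change quoted above: as I understand
the blkptr layout, an embedded block pointer stores its payload
in words that a normal block pointer uses for the DVAs and the
physical birth txg, so storing a physical birth value into such
a bp clobbers payload bits. A minimal sketch of the guarded
pattern, using the stock OpenZFS macros (the helper name is
just illustrative, not upstream code):

#include <sys/spa.h>	/* blkptr_t, BP_IS_EMBEDDED(), BP_PHYSICAL_BIRTH() */

/*
 * Illustrative only: assign the physical birth txg into dst,
 * but only when bp is a normal (non-embedded) block pointer.
 * For an embedded bp, the blk_phys_birth word overlaps embedded
 * payload and must be left alone -- which is what the
 * dmu_brt_clone() hunk above arranges for dl->dr_overridden_by.
 */
static inline void
copy_phys_birth_if_normal(blkptr_t *dst, const blkptr_t *bp)
{
	if (!BP_IS_EMBEDDED(bp))
		dst->blk_phys_birth = BP_PHYSICAL_BIRTH(bp);
}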

[Reminder: All my testing has been of a "block_cloning was
never enabled" context. This one has the dnode_next_offset
change involved, unlike the prior one.]
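
(For anyone wanting to check that context on a pool of their
own: the feature state can be inspected via, for example,

# zpool get feature@block_cloning zroot

where "zroot" is just a placeholder pool name and a VALUE of
"disabled" corresponds to the "never enabled" status.)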

There was one failed fetch but no other failures:

[01:25:02] [04] [00:01:07] Finished ports-mgmt/fallout | fallout-1.0.4_8: Failed: fetch
. . .
[09:13:58] Failed ports: ports-mgmt/fallout:fetch
[main-CA72-bulk_a-default] [2023-04-16_00h38m01s] [committing:] Queued: 476 Built: 475 Failed: 1   Skipped: 0   Ignored: 0   Fetched: 0   Tobuild: 0    Time: 09:13:45

Running the bulk again:

. . .
[00:00:22] Building 1 packages using up to 1 builders
[00:00:22] Hit CTRL+t at any time to see build progress and stats
[00:00:22] [01] [00:00:00] Builder starting
[00:00:24] [01] [00:00:02] Builder started
[00:00:24] [01] [00:00:00] Building ports-mgmt/fallout | fallout-1.0.4_8
[00:01:04] [01] [00:00:40] Finished ports-mgmt/fallout | fallout-1.0.4_8: Success
. . .

I do not expect the fetch failure to be evidence of a problem.

I'm counting this as:  No evidence of corruption problems.
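
(If someone wants a cross-check beyond the bulk builds
themselves, a scrub pass should also turn up no errors, again
with "zroot" as a placeholder pool name:

# zpool scrub zroot
# zpool status -v zroot

That was not part of what I ran here; just noting the option.)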

===
Mark Millard
marklmi at yahoo.com

