Was there a test added for this case to ensure it doesn't resurface later?
--
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/607#issuecomment-384799173
Closed #607 via 16127b627bbb36d736aa7de17859fe8444fc0cce.
--
openzfs:
pcd1193182 commented on this pull request.
> @@ -2206,6 +2215,9 @@ receive_write(struct receive_writer_arg *rwa, struct drr_write *drrw,
> 	rwa->last_object = drrw->drr_object;
> 	rwa->last_offset = drrw->drr_offset;
> +	if (rwa->last_object > rwa->max_object)
> +		rwa->max_object = rwa->last_object;
@citrus-it in the past we've seen storage arrays that have zeroed out blocks
behind the scenes, leading to ZFS-reported checksum errors. We wanted a
default value that would not be confused with that type of corruption. We did
make this a global parameter (`zfs_initialize_value`) so that it can be changed.
I know it's already integrated, but the original description was about zeroing
out the unused blocks, while the implementation actually writes `0xdeadbeef`.
That makes it less useful for me in the case where I want to zero the blocks
prior to doing a hole-punch on a sparse VM disk (and I