On 2021-01-10 12:52, David Woodhouse wrote:
I migrated a system to btrfs which was hosting virtual machines with
qemu.

Using it without disabling copy-on-write was a mistake, of course, and
it became horribly fragmented and slow.
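(For reference, copy-on-write can be disabled per directory with chattr +C before the image files are created; the attribute only affects files created afterwards, so existing images have to be copied in. A sketch, with illustrative paths:)

```shell
# Mark the images directory NOCOW so VM images created inside it
# are not subject to copy-on-write (set this BEFORE creating files).
mkdir -p /var/lib/images
chattr +C /var/lib/images
lsattr -d /var/lib/images   # the 'C' attribute should now be shown
```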

So I tried copying it to a new file... but it has actual *errors* too,
which I think are because it was using the 'directsync' caching mode
for block I/O in qemu.

https://bugzilla.redhat.com/show_bug.cgi?id=1204569#c12

I filed https://bugzilla.redhat.com/show_bug.cgi?id=1914433

What I see is that *both* disks of the RAID-1 have data which is
consistent, and does not match the checksum that btrfs expects:

[ 6827.513630] BTRFS warning (device sda3): csum failed root 5 ino 24387997 off 2935152640 csum 0x81529887 expected csum 0xb0093af0 mirror 1
[ 6827.517448] BTRFS error (device sda3): bdev /dev/sdb3 errs: wr 0, rd 0, flush 0, corrupt 8286, gen 0
[ 6827.527281] BTRFS warning (device sda3): csum failed root 5 ino 24387997 off 2935152640 csum 0x81529887 expected csum 0xb0093af0 mirror 2
[ 6827.530817] BTRFS error (device sda3): bdev /dev/sda3 errs: wr 0, rd 0, flush 0, corrupt 9115, gen 0

It looks like an O_DIRECT bug where the data *do* get updated without
updating the checksum. Which is kind of the worst of both worlds, I
suppose, since I also did get the appalling performance of COW and
fragmentation.

With O_DIRECT, btrfs does no checksumming or compression; that is one of the known limitations of direct I/O at the moment. But it should not cause those dmesg errors. I believe such writes should show up in scrub as no_csum:

# btrfs scrub status -R /mnt/6TB/
UUID:             fe0a1142-51ab-4181-b635-adbf9f4ea6e6
Scrub started:    Sun Nov 22 13:11:20 2020
Status:           finished
Duration:         9:37:39
        data_extents_scrubbed: 164773032
        tree_extents_scrubbed: 1113696
        data_bytes_scrubbed: 10570715316224
        tree_bytes_scrubbed: 18246795264
        read_errors: 0
        csum_errors: 0
        verify_errors: 0
        no_csum: 3120
        csum_discards: 0
        super_errors: 0
        malloc_errors: 0
        uncorrectable_errors: 0
        unverified_errors: 0
        corrected_errors: 0
        last_physical: 5823976701952



In the short term, all I want to do is make a copy of the file, using
the data which are on the disk, regardless of the fact that btrfs
thinks the checksum doesn't match. Is there a way I can turn off
*checking* of the checksum for that specific file (or file descriptor)?
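(One possibility, if the kernel is new enough — 5.11 adds the rescue=ignoredatacsums mount option, per btrfs(5) — is a read-only rescue mount that skips data-checksum verification for the whole filesystem rather than one file. A sketch; device and mount point are illustrative:)

```shell
# Read-only mount that skips verification of data checksums, so
# files with mismatched csums can still be read and copied out.
mount -o ro,rescue=ignoredatacsums /dev/sda3 /mnt/rescue
```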

Or is the only way to do it with something like FIBMAP, reading the
offending blocks directly from the underlying disk and then writing
them into the appropriate offset in (a copy of) the file? A plan which
is slightly complicated by the fact that of course btrfs doesn't
support FIBMAP.
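(btrfs does implement the newer FIEMAP ioctl, though, so filefrag from e2fsprogs can map the file's logical offsets to physical addresses on the device; the byte offset from the csum warning can then be located within a specific extent. File name is illustrative:)

```shell
# FIEMAP-based extent map: logical offset -> physical offset, per extent
filefrag -v /images/vm.img
```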

What's the best way to recover the data?


You can use GNU ddrescue to copy files. It can skip the offending blocks and replace the bad data with zeroes. Not sure how well qemu will handle that though.
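(When ddrescue isn't available, plain dd can do a cruder version of the same thing. A sketch, assuming a 4 KiB block size and illustrative file names:)

```shell
# Copy the image, continuing past read errors (conv=noerror);
# conv=sync zero-pads failed reads so the data that follows
# stays at the correct offset in the output file.
dd if=vm.img of=vm-recovered.img bs=4096 conv=noerror,sync
```

Unlike ddrescue, dd makes no retries and keeps no map of the bad regions, so it only gives a best-effort single pass.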


I did some tests with qemu to try to avoid O_DIRECT. This worked, and also re-enabled compression and csums: emulate an NVMe device with a larger-than-4KiB block size. I've tried 8192 and 16384. Although it works, it may also be really slow; I have not done any benchmarks yet.


# qemu-system-x86_64 -D qemu.log \
-name fedora \
-enable-kvm -machine q35 -device intel-iommu \
-smp cores=4 -m 3072 \
-drive format=raw,file=disk2.img,cache=writeback,aio=io_uring,if=none,id=drv0 \
-device nvme,drive=drv0,serial=1234,physical_block_size=8192,logical_block_size=8192,write-cache=on \
-display vnc=192.168.0.1:0,to=100 \
-net nic,model=virtio,macaddr=00:00:00:00:00:01 -net tap,ifname=qemu0

Just a warning about cache=writeback: I have not checked how safe it is with regard to crashes and power loss.
