I found that a RAID5 or RAID6 filesystem might get corrupted in the following
scenario:
1. Create a 4-disk RAID6 filesystem
2. Preallocate 16 files of 10GB each
3. Run fio: 'fio --name=testload --directory=./ --size=10G --numjobs=16
--bs=64k --iodepth=64 --rw=randrw --verify=sha256 --time_based --runtim
' value. This looks like quite a rare condition.
Subsequently, in the raid_rmw_end_io handler, that failed bio can be
translated to a wrong stripe number, so the wrong rbio ends up being failed.
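
To make the mis-translation concrete, here is a minimal, self-contained
user-space sketch. It is not the btrfs code: the struct layout, field names
and both helper functions are invented for illustration. Stripes belonging to
the same rbio live on different devices but can cover the same physical byte
range on their respective disks, so matching a completed bio by physical
offset alone can return the wrong stripe index; comparing the device as well
removes the ambiguity.

/*
 * Minimal user-space sketch (hypothetical stand-ins, not the btrfs code):
 * stripes on different devices may cover the same physical byte range, so
 * a lookup by physical offset alone can pick the wrong stripe.
 */
#include <stdio.h>
#include <stdint.h>

#define NUM_STRIPES 4

struct stripe {
	int      dev_id;     /* stand-in for the stripe's backing device */
	uint64_t physical;   /* start of the stripe on that device */
};

struct rbio {
	struct stripe stripes[NUM_STRIPES];
	uint64_t      stripe_len;
};

/* Match by physical offset only -- the ambiguous lookup. */
static int find_stripe_by_physical(const struct rbio *rbio, uint64_t physical)
{
	for (int i = 0; i < NUM_STRIPES; i++) {
		uint64_t start = rbio->stripes[i].physical;

		if (physical >= start && physical < start + rbio->stripe_len)
			return i;    /* first physical match wins, device ignored */
	}
	return -1;
}

/* Match by physical offset *and* device -- unambiguous. */
static int find_stripe_by_physical_and_dev(const struct rbio *rbio,
					   uint64_t physical, int dev_id)
{
	for (int i = 0; i < NUM_STRIPES; i++) {
		const struct stripe *s = &rbio->stripes[i];

		if (physical >= s->physical &&
		    physical < s->physical + rbio->stripe_len &&
		    dev_id == s->dev_id)
			return i;
	}
	return -1;
}

int main(void)
{
	/* All four stripes start at the same physical offset on their disks. */
	struct rbio rbio = {
		.stripes = { {0, 1048576}, {1, 1048576}, {2, 1048576}, {3, 1048576} },
		.stripe_len = 65536,
	};

	/* A failed bio that was issued to device 2 at offset 1048576 + 4096. */
	printf("physical only     -> stripe %d\n",
	       find_stripe_by_physical(&rbio, 1052672));            /* 0: wrong */
	printf("physical + device -> stripe %d\n",
	       find_stripe_by_physical_and_dev(&rbio, 1052672, 2)); /* 2: right */
	return 0;
}

Compiled with gcc -std=c99, the first lookup reports stripe 0 even though the
bio was actually issued to the device backing stripe 2.
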
Reviewed-by: Johannes Thumshirn
Signed-off-by: Dmitriy Gorokh
---
fs/btrfs/raid56.c | 6 ++++++
1 file changed, 6 insertions(+)
When a disk that is part of a RAID6 filesystem is detached, the following
kernel OOPS may happen:
[63122.680461] BTRFS error (device sdo): bdev /dev/sdo errs: wr 0, rd 0, flush 1, corrupt 0, gen 0
[63122.719584] BTRFS warning (device sdo): lost page write due to IO error on /dev/sdo
[63122.
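
For illustration only: the diffstat above points at fs/btrfs/raid56.c, and the
OOPS suggests a comparison that reaches through a block-device pointer which
becomes NULL once the disk is detached. The stand-alone sketch below uses
simplified, invented types (they only mirror the kernel names) to show the
guard pattern of testing that pointer before dereferencing it; it is not the
actual patch.

/*
 * Stand-alone sketch with invented types: after a hot-unplug the pointer to
 * the backing block device can become NULL, so any comparison that reaches
 * through it must test the pointer first or it will crash.
 */
#include <stdio.h>
#include <stddef.h>

struct gendisk_stub { int id; };
struct bdev_stub { struct gendisk_stub *bd_disk; };
struct raid_dev { struct bdev_stub *bdev; /* NULL once the disk is detached */ };

/* Safe comparison: a detached device simply never matches. */
static int bio_disk_matches(const struct gendisk_stub *bio_disk,
			    const struct raid_dev *dev)
{
	return dev->bdev != NULL && dev->bdev->bd_disk == bio_disk;
}

int main(void)
{
	struct gendisk_stub disk = { .id = 1 };
	struct bdev_stub bdev = { .bd_disk = &disk };

	struct raid_dev present  = { .bdev = &bdev };
	struct raid_dev detached = { .bdev = NULL };

	printf("present:  %d\n", bio_disk_matches(&disk, &present));  /* 1 */
	printf("detached: %d\n", bio_disk_matches(&disk, &detached)); /* 0, no crash */
	return 0;
}

With the NULL check in place, a detached device simply never matches instead
of being dereferenced.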