On 23/02/2021 10:13, Johannes Thumshirn wrote:
> On 22/02/2021 21:07, Steven Davies wrote:
>
> [+CC Anand ]
>
>> Booted my system with vanilla kernel 5.11.0 for the first time and received
>> this:
>>
>> BTRFS info (device nvme0n1p2): has skinny extents
>> BTRFS error (device nvme0n1p2): device total_bytes should be at most 964757028864 but found 964770336768
>> BTRFS error (device nvme0n1p2): failed to read chunk tree: -22
>>
>> Booting with 5.10.12 has no issues.
>>
>> # btrfs filesystem usage /
>> Overall:
>> Device size: 898.51GiB
>> Device allocated: 620.06GiB
>> Device unallocated: 278.45GiB
>> Device missing: 0.00B
>> Used: 616.58GiB
>> Free (estimated): 279.94GiB (min: 140.72GiB)
>> Data ratio: 1.00
>> Metadata ratio: 2.00
>> Global reserve: 512.00MiB (used: 0.00B)
>>
>> Data,single: Size:568.00GiB, Used:566.51GiB (99.74%)
>> /dev/nvme0n1p2 568.00GiB
>>
>> Metadata,DUP: Size:26.00GiB, Used:25.03GiB (96.29%)
>> /dev/nvme0n1p2 52.00GiB
>>
>> System,DUP: Size:32.00MiB, Used:80.00KiB (0.24%)
>> /dev/nvme0n1p2 64.00MiB
>>
>> Unallocated:
>> /dev/nvme0n1p2 278.45GiB
>>
>> # parted -l
>> Model: Sabrent Rocket Q (nvme)
>> Disk /dev/nvme0n1: 1000GB
>> Sector size (logical/physical): 512B/512B
>> Partition Table: gpt
>> Disk Flags:
>>
>> Number  Start   End     Size    File system     Name  Flags
>>  1      1049kB  1075MB  1074MB  fat32                 boot, esp
>>  2      1075MB  966GB   965GB   btrfs
>>  3      966GB   1000GB  34.4GB  linux-swap(v1)        swap
>>
>> What has changed in 5.11 which might cause this?
>>
>>
>
> This line:
>> BTRFS info (device nvme0n1p2): has skinny extents
>> BTRFS error (device nvme0n1p2): device total_bytes should be at most 964757028864 but found 964770336768
>> BTRFS error (device nvme0n1p2): failed to read chunk tree: -22
>
> comes from 3a160a933111 ("btrfs: drop never met disk total bytes check in
> verify_one_dev_extent")
> which went into v5.11-rc1.
>
> IIUIC the device item's total_bytes and the block device inode's size are off
> by 12M, so the check
> introduced in the above commit refuses to mount the FS.
>
> Anand any idea?
OK, this is getting interesting:
btrfs-progs sets the device's total_bytes at mkfs time and obtains it from
ioctl(..., BLKGETSIZE64, ...);
BLKGETSIZE64 does:
return put_u64(argp, i_size_read(bdev->bd_inode));
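
To make the userspace side concrete, here is a minimal sketch (not the actual
btrfs-progs code, just an illustration of the same ioctl) of how that size is
obtained - BLKGETSIZE64 simply reports the block device inode's i_size in
bytes:

    /* Query a block device's size via BLKGETSIZE64, the same ioctl
     * mkfs uses to fill in the device item's total_bytes. */
    #include <stdio.h>
    #include <stdint.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>   /* BLKGETSIZE64 */

    int main(int argc, char **argv)
    {
            uint64_t size = 0;
            int fd;

            if (argc != 2) {
                    fprintf(stderr, "usage: %s <blockdev>\n", argv[0]);
                    return 1;
            }

            fd = open(argv[1], O_RDONLY);
            if (fd < 0) {
                    perror("open");
                    return 1;
            }

            /* Returns i_size_read(bdev->bd_inode) in bytes. */
            if (ioctl(fd, BLKGETSIZE64, &size) < 0) {
                    perror("ioctl(BLKGETSIZE64)");
                    close(fd);
                    return 1;
            }

            printf("%s: %llu bytes\n", argv[1], (unsigned long long)size);
            close(fd);
            return 0;
    }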
The new check in read_one_dev() does:
    u64 max_total_bytes = i_size_read(device->bdev->bd_inode);

    if (device->total_bytes > max_total_bytes) {
            btrfs_err(fs_info,
                      "device total_bytes should be at most %llu but found %llu",
                      max_total_bytes, device->total_bytes);
            return -EINVAL;
    }
So the bdev inode's i_size must have changed between mkfs and mount.
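If I read the numbers right, the delta in the log is
964770336768 - 964757028864 = 13307904 bytes, roughly 12.7 MiB - which matches
the ~12M Johannes mentioned, with the block device now reporting less space
than the total_bytes recorded in the device item at mkfs time.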
Steven, can you please run:
blockdev --getsize64 /dev/nvme0n1p2