For completeness here's the summary of my replacement of all four 6 TB drives
(henceforth "6T") with 8 TB drives ("8T") in a btrfs raid1 volume.
I included transfer rates so that others can get a rough idea of what to
expect when doing something similar. All capacity units are SI, not base 2.
This issue was found while testing in-band dedupe ENOSPC behaviour:
run_one_delayed_ref() may occasionally fail with ENOSPC, in which case
__btrfs_run_delayed_refs() returns but forgets to add num_heads_ready
back, which triggers "WARN_ON(delayed_refs->num_heads_ready == 0)" in
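The bug pattern described above, decrementing a ready-counter when a head is
taken and then bailing out on error without restoring it, can be sketched in
plain C. The names here are simplified stand-ins, not the actual btrfs code:

```c
#include <errno.h>

/* Simplified stand-in for the delayed-refs bookkeeping. */
struct delayed_refs {
    int num_heads_ready;            /* heads available to be processed */
};

/* Stand-in for run_one_delayed_ref(); fails with -ENOSPC on demand. */
static int run_one_ref(int fail)
{
    return fail ? -ENOSPC : 0;
}

/* Mirrors the fixed control flow: one head is taken (counter
 * decremented), and on failure the counter is put back before
 * returning, so a later WARN_ON(num_heads_ready == 0) cannot
 * fire spuriously. */
static int run_delayed_refs(struct delayed_refs *dr, int fail)
{
    int ret;

    dr->num_heads_ready--;          /* take one ready head */
    ret = run_one_ref(fail);
    if (ret) {
        dr->num_heads_ready++;      /* restore on the error path */
        return ret;
    }
    return 0;
}
```

The buggy version simply omits the increment on the error path, leaving the
counter permanently short by one per failure.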
Not so long ago, I had a disk fail in a btrfs filesystem with raid1
metadata and raid5 data. I mounted the filesystem readonly, replaced
the failing disk, and attempted to recover by adding the new disk and
deleting the missing disk.
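The recovery attempt described above roughly corresponds to the command
sequence below. This is a hedged sketch: the device paths and mount point are
placeholders, and the filesystem must be mounted writable (degraded) rather
than read-only for the add/delete to work:

```shell
# Mount degraded so the filesystem comes up without the dead disk;
# a read-only mount would not allow the device operations below.
mount -o degraded /dev/sdb /mnt

# Add the replacement disk to the filesystem...
btrfs device add /dev/sde /mnt

# ...then remove the missing device, which relocates the data that
# lived on it (slow with raid5 data, since stripes must be rebuilt).
btrfs device delete missing /mnt
```

Note that `btrfs replace start` is generally the faster path for swapping a
failed device, since it copies/rebuilds directly instead of doing a full
device-delete relocation.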
It's not going well so far. Pay attention: there are at least
The series is aimed at getting rid of the CURRENT_TIME and CURRENT_TIME_SEC
macros. The macros are not y2038 safe, and there is no plan to make them
y2038 safe. The ktime_get_* APIs, which are y2038 safe, can be used in their
place.
Thanks to Arnd Bergmann for all the guidance and
btrfs_root_item maintains the ctime for root updates.
This is not part of vfs_inode.
Since current_time() takes a struct inode * argument, as Linus
suggested, it cannot be used to update root times unless we modify
the signature.
Since btrfs uses nanosecond time granularity, it