Recovering from the other copies of the superblock is fundamental
to BTRFS, which provides resilience against a single LBA failure
with the DUP group profile.

Further, the test case [1] shows a good but stale superblock
at copy#2, which will lead to confusion during auto/manual recovery.
So, strictly speaking, if a device has three copies of the superblock
and we have permission to wipe it (the -f option), then we have to wipe
all the copies of the superblock.

If there is any objection to writing beyond mkfs.btrfs -b <size>, then
we could fail the mkfs/dev-add/dev-replace operation and ask the user
to wipe the stale copy manually using dd, as there is no use in keeping
only copy#2 once the new FS has been written.

Test case: Note that the copy#2 fsid is different from the primary and
copy#1 fsids.

mkfs.btrfs -qf /dev/mapper/vg-lv && \
mkfs.btrfs -qf -b1G /dev/mapper/vg-lv && \
btrfs in dump-super -a /dev/mapper/vg-lv | grep -E '.fsid|superblock:'

superblock: bytenr=65536, device=/dev/mapper/vg-lv
dev_item.fsid           ebc67d01-7fc5-43f0-90b4-d1925002551e [match]
superblock: bytenr=67108864, device=/dev/mapper/vg-lv
dev_item.fsid           ebc67d01-7fc5-43f0-90b4-d1925002551e [match]
superblock: bytenr=274877906944, device=/dev/mapper/vg-lv
dev_item.fsid           b97a9206-593b-4933-a424-c6a6ee23fe7c [match]
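For the manual dd route mentioned above, a hedged sketch of wiping the
stale copy#2 at its fixed offset (device path assumed from the test case;
a superblock slot is 4KiB on disk; the command is only printed here, drop
the echo to actually run it):

```shell
# Assumed device from the test case above.
DEV=/dev/mapper/vg-lv

# Stale copy#2 sits at 256GiB (bytenr 274877906944 in the dump above).
OFF=274877906944
echo dd if=/dev/zero of="$DEV" bs=4096 seek=$((OFF / 4096)) count=1 conv=notrunc
```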

Signed-off-by: Anand Jain <anand.j...@oracle.com>
---
 Hope that with this we can patch the kernel to auto-recover from a
 failed primary SB. In the earlier discussion on that, I think we
 were scrutinizing the wrong side (the kernel) of the problem.
 Also, we need to fail the mount if all the copies of the SB do
 not have the same fsid.

 utils.c | 35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/utils.c b/utils.c
index 00020e1d6bdf..9c027f77d9c1 100644
--- a/utils.c
+++ b/utils.c
@@ -365,6 +365,41 @@ int btrfs_prepare_device(int fd, const char *file, u64 *block_count_ret,
                return 1;
        }
 
+       /*
+        * Check for BTRFS SB copies up until btrfs_device_size() and zero
+        * them, so that the kernel (or the user doing a manual recovery)
+        * is not confused by a stale SB copy.
+        */
+       if (block_count != btrfs_device_size(fd, &st)) {
+               for (i = 1; i < BTRFS_SUPER_MIRROR_MAX; i++) {
+                       struct btrfs_super_block *disk_super;
+                       char buf[BTRFS_SUPER_INFO_SIZE];
+                       disk_super = (struct btrfs_super_block *)buf;
+
+                       /* Already zeroed above */
+                       if (btrfs_sb_offset(i) < block_count)
+                               continue;
+
+                       /* Beyond actual disk size */
+                       if (btrfs_sb_offset(i) >= btrfs_device_size(fd, &st))
+                               continue;
+
+                       /* Does not contain any stale SB */
+                       if (btrfs_read_dev_super(fd, disk_super,
+                                                btrfs_sb_offset(i), 0))
+                               continue;
+
+                       ret = zero_dev_clamped(fd, btrfs_sb_offset(i),
+                                               BTRFS_SUPER_INFO_SIZE,
+                                               btrfs_device_size(fd, &st));
+                       if (ret < 0) {
+                               error("failed to zero device '%s' bytenr %llu: %s",
+                                       file, btrfs_sb_offset(i), strerror(-ret));
+                               return 1;
+                       }
+               }
+       }
+
        *block_count_ret = block_count;
        return 0;
 }
-- 
2.15.0
