On 2019/06/28 12:56, Anand Jain wrote:
> On 7/6/19 9:10 PM, Naohiro Aota wrote:
>> When in HMZONED mode, make sure that device super blocks are located in
>> randomly writable zones of zoned block devices. That is, do not write super
>> blocks in sequential write required zones of host-managed zoned block
>> devices as update would not be possible.
> 
>    By design all copies of SB must be updated at each transaction,
>    as they are redundant copies they must match at the end of
>    each transaction.
> 
>    Instead of skipping the sb updates, why not alter number of
>    copies at the time of mkfs.btrfs?
> 
> Thanks, Anand

That is exactly what the patched code does. It updates all the SB
copies, but it simply avoids writing a copy into sequential write
required zones. mkfs.btrfs does the same. So all the available SB
copies always match after a transaction. At an SB location in a
sequential write required zone you will see a zeroed region (in the
next version of the patch series), but that is easy to ignore: it
lacks even BTRFS_MAGIC.
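
For illustration, ignoring such a slot only takes a magic check when
reading the super blocks back. The struct and helper below are simplified
stand-ins made up for this example (only the BTRFS_MAGIC value itself is
real), not the actual kernel code:

#include <stdint.h>

#define BTRFS_MAGIC 0x4D5F53665248425FULL	/* ASCII "_BHRfS_M" */

/* Simplified stand-in for the on-disk super block, for illustration only. */
struct example_super_block {
	uint64_t bytenr;
	uint64_t magic;
	uint64_t generation;
};

/*
 * A copy that was skipped because its offset falls in a sequential write
 * required zone reads back as all zeroes, so the magic never matches and
 * the copy is simply not considered.
 */
static int example_super_copy_present(const struct example_super_block *sb,
				      uint64_t expected_bytenr)
{
	return sb->magic == BTRFS_MAGIC && sb->bytenr == expected_bytenr;
}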

The number of SB copies available on an HMZONED device will therefore
vary with its zone size and zone layout.
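
As a rough worked example of how the copy count depends on the layout,
here is a stand-alone sketch. The zone layout and helper are made up for
illustration; only the three fixed SB offsets (64 KiB, 64 MiB, 256 GiB)
are real:

#include <stdio.h>
#include <stdint.h>

#define SB_MIRRORS 3

/* The three fixed btrfs super block offsets. */
static const uint64_t sb_offsets[SB_MIRRORS] = {
	64ULL * 1024,			/* primary: 64 KiB  */
	64ULL * 1024 * 1024,		/* copy 1:  64 MiB  */
	256ULL * 1024 * 1024 * 1024,	/* copy 2:  256 GiB */
};

/*
 * Assumed layout for the example: the first @nr_conv zones of the drive
 * are conventional (randomly writable), everything after that is
 * sequential write required. Real drives report this per zone.
 */
static int offset_is_random_writable(uint64_t pos, uint64_t zone_size,
				     unsigned int nr_conv)
{
	return pos / zone_size < nr_conv;
}

int main(void)
{
	uint64_t zone_size = 256ULL * 1024 * 1024;	/* 256 MiB zones */
	unsigned int nr_conv = 2;			/* two conventional zones */
	int i, copies = 0;

	for (i = 0; i < SB_MIRRORS; i++)
		if (offset_is_random_writable(sb_offsets[i], zone_size, nr_conv))
			copies++;

	/* With this layout only the 64 KiB and 64 MiB copies can be kept. */
	printf("usable super block copies: %d\n", copies);
	return 0;
}

Changing zone_size or nr_conv in the sketch changes the count accordingly.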

Thanks,

> 
>> Signed-off-by: Damien Le Moal <damien.lem...@wdc.com>
>> Signed-off-by: Naohiro Aota <naohiro.a...@wdc.com>
>> ---
>>    fs/btrfs/disk-io.c     | 11 +++++++++++
>>    fs/btrfs/disk-io.h     |  1 +
>>    fs/btrfs/extent-tree.c |  4 ++++
>>    fs/btrfs/scrub.c       |  2 ++
>>    4 files changed, 18 insertions(+)
>>
>> diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
>> index 7c1404c76768..ddbb02906042 100644
>> --- a/fs/btrfs/disk-io.c
>> +++ b/fs/btrfs/disk-io.c
>> @@ -3466,6 +3466,13 @@ struct buffer_head *btrfs_read_dev_super(struct block_device *bdev)
>>      return latest;
>>    }
>>    
>> +int btrfs_check_super_location(struct btrfs_device *device, u64 pos)
>> +{
>> +    /* any address is good on a regular (zone_size == 0) device */
>> +    /* non-SEQUENTIAL WRITE REQUIRED zones are capable on a zoned device */
>> +    return device->zone_size == 0 || !btrfs_dev_is_sequential(device, pos);
>> +}
>> +
>>    /*
>>     * Write superblock @sb to the @device. Do not wait for completion, all the
>>     * buffer heads we write are pinned.
>> @@ -3495,6 +3502,8 @@ static int write_dev_supers(struct btrfs_device *device,
>>              if (bytenr + BTRFS_SUPER_INFO_SIZE >=
>>                  device->commit_total_bytes)
>>                      break;
>> +            if (!btrfs_check_super_location(device, bytenr))
>> +                    continue;
>>    
>>              btrfs_set_super_bytenr(sb, bytenr);
>>    
>> @@ -3561,6 +3570,8 @@ static int wait_dev_supers(struct btrfs_device *device, int max_mirrors)
>>              if (bytenr + BTRFS_SUPER_INFO_SIZE >=
>>                  device->commit_total_bytes)
>>                      break;
>> +            if (!btrfs_check_super_location(device, bytenr))
>> +                    continue;
>>    
>>              bh = __find_get_block(device->bdev,
>>                                    bytenr / BTRFS_BDEV_BLOCKSIZE,
>> diff --git a/fs/btrfs/disk-io.h b/fs/btrfs/disk-io.h
>> index a0161aa1ea0b..70e97cd6fa76 100644
>> --- a/fs/btrfs/disk-io.h
>> +++ b/fs/btrfs/disk-io.h
>> @@ -141,6 +141,7 @@ struct extent_map *btree_get_extent(struct btrfs_inode *inode,
>>              struct page *page, size_t pg_offset, u64 start, u64 len,
>>              int create);
>>    int btrfs_get_num_tolerated_disk_barrier_failures(u64 flags);
>> +int btrfs_check_super_location(struct btrfs_device *device, u64 pos);
>>    int __init btrfs_end_io_wq_init(void);
>>    void __cold btrfs_end_io_wq_exit(void);
>>    
>> diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
>> index 3d41d840fe5c..ae2c895d08c4 100644
>> --- a/fs/btrfs/extent-tree.c
>> +++ b/fs/btrfs/extent-tree.c
>> @@ -267,6 +267,10 @@ static int exclude_super_stripes(struct btrfs_block_group_cache *cache)
>>                      return ret;
>>      }
>>    
>> +    /* we won't have super stripes in sequential zones */
>> +    if (cache->alloc_type == BTRFS_ALLOC_SEQ)
>> +            return 0;
>> +
>>      for (i = 0; i < BTRFS_SUPER_MIRROR_MAX; i++) {
>>              bytenr = btrfs_sb_offset(i);
>>              ret = btrfs_rmap_block(fs_info, cache->key.objectid,
>> diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
>> index f7b29f9db5e2..36ad4fad7eaf 100644
>> --- a/fs/btrfs/scrub.c
>> +++ b/fs/btrfs/scrub.c
>> @@ -3720,6 +3720,8 @@ static noinline_for_stack int scrub_supers(struct scrub_ctx *sctx,
>>              if (bytenr + BTRFS_SUPER_INFO_SIZE >
>>                  scrub_dev->commit_total_bytes)
>>                      break;
>> +            if (!btrfs_check_super_location(scrub_dev, bytenr))
>> +                    continue;
>>    
>>              ret = scrub_pages(sctx, bytenr, BTRFS_SUPER_INFO_SIZE, bytenr,
>>                                scrub_dev, BTRFS_EXTENT_FLAG_SUPER, gen, i,
>>
> 
> 
