On 2020/5/30 6:34, Jaegeuk Kim wrote:
> On 05/29, Chao Yu wrote:
>> Under heavy fsstress, we may trigger a panic while issuing discard,
>> because __check_sit_bitmap() detects that a discard command may erase
>> valid data blocks. The root cause is the race described in the stack
>> below: since we removed the lock when flushing quota data, quota data
>> writeback may race with write_checkpoint(), causing an inconsistency
>> between the cached discard entry and the segment bitmap.
>>
>> - f2fs_write_checkpoint
>>  - block_operations
>>   - set_sbi_flag(sbi, SBI_QUOTA_SKIP_FLUSH)
>>  - f2fs_flush_sit_entries
>>   - add_discard_addrs
>>    - __set_bit_le(i, (void *)de->discard_map);
>>                                              - f2fs_write_data_pages
>>                                               - f2fs_write_single_data_page
>>                                                 : inode is a quota one, cp_rwsem won't be locked
>>                                                - f2fs_do_write_data_page
>>                                                 - f2fs_allocate_data_block
>>                                                  - f2fs_wait_discard_bio
>>                                                    : discard entry has not been added yet
>>                                                  - update_sit_entry
>>  - f2fs_clear_prefree_segments
>>   - f2fs_issue_discard
>>   : add discard entry
>>
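(Context for the panic: __check_sit_bitmap() verifies that a pending
discard never covers a block that is still marked valid in the SIT
bitmap. A minimal userspace sketch of that invariant, with made-up
names, just to make the failing check concrete:

	#include <stdbool.h>
	#include <stdint.h>

	/*
	 * Hypothetical sketch, not the kernel code: a discard range
	 * [start_blk, start_blk + nr_blks) must not touch any block
	 * whose bit is still set in the valid-block bitmap.
	 */
	static bool discard_hits_valid_block(const uint8_t *valid_map,
					     unsigned int start_blk,
					     unsigned int nr_blks)
	{
		for (unsigned int i = start_blk; i < start_blk + nr_blks; i++) {
			/* set bit => live data, discarding it would corrupt */
			if (valid_map[i / 8] & (1u << (i % 8)))
				return true;
		}
		return false;
	}

In the race above, the quota write allocates and validates a block
after add_discard_addrs() has set the discard bitmap bit, so exactly
this check fires when the discard is issued.)
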
>> This patch fixes this issue by reverting 435cbab95e39 ("f2fs: fix quota_sync
>> failure due to f2fs_lock_op").
>>
>> Fixes: 435cbab95e39 ("f2fs: fix quota_sync failure due to f2fs_lock_op")
> 
> The previous patch fixed quota_sync getting EAGAIN all the time.
> How about this instead? It seems to work for the fsstress test.
> 
> ---
>  fs/f2fs/segment.c | 11 +++++++++++
>  1 file changed, 11 insertions(+)
> 
> diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
> index ebbadde6cbced..f67cffc38975e 100644
> --- a/fs/f2fs/segment.c
> +++ b/fs/f2fs/segment.c
> @@ -3095,6 +3095,14 @@ void f2fs_allocate_data_block(struct f2fs_sb_info *sbi, struct page *page,
>       struct curseg_info *curseg = CURSEG_I(sbi, type);
>       bool put_pin_sem = false;
>  
> +     /*
> +      * We need to wait for node_write to avoid block allocation during
> +      * checkpoint. This can only happen to quota writes which can cause
> +      * the below discard race condition.
> +      */
> +     if (IS_DATASEG(type))

Here type is CURSEG_COLD_DATA_PINNED, and
IS_DATASEG(CURSEG_COLD_DATA_PINNED) is false, so the node_write lock
will not be taken; later, type is updated to CURSEG_COLD_DATA, so at
the end of the function we will try to release a node_write lock we
never held.
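One way to keep the lock/unlock balanced (just a sketch on top of your
diff, not tested; the local variable name is made up) is to latch the
decision before type can be rewritten, mirroring the existing
put_pin_sem pattern:

	bool node_write_locked = false;

	if (IS_DATASEG(type)) {
		down_write(&sbi->node_write);
		node_write_locked = true;
	}

	...

	if (node_write_locked)
		up_write(&sbi->node_write);

Then the release matches the acquire even after CURSEG_COLD_DATA_PINNED
is turned into CURSEG_COLD_DATA inside the function.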

Thanks,

> +             down_write(&sbi->node_write);
> +
>       if (type == CURSEG_COLD_DATA) {
>               /* GC during CURSEG_COLD_DATA_PINNED allocation */
>               if (down_read_trylock(&sbi->pin_sem)) {
> @@ -3174,6 +3182,9 @@ void f2fs_allocate_data_block(struct f2fs_sb_info *sbi, struct page *page,
>  
>       if (put_pin_sem)
>               up_read(&sbi->pin_sem);
> +
> +     if (IS_DATASEG(type))
> +             up_write(&sbi->node_write);
>  }
>  
>  static void update_device_state(struct f2fs_io_info *fio)
> 
