2014-02-04, OGAWA Hirofumi <hirof...@mail.parknet.co.jp>:
> Namjae Jeon <linkinj...@gmail.com> writes:
>
>> 2014-02-04, OGAWA Hirofumi <hirof...@mail.parknet.co.jp>:
>>> Namjae Jeon <linkinj...@gmail.com> writes:
>>>
>>>>>>          /* fat_get_cluster() assumes the requested blocknr isn't truncated. */
>>>>>>          down_read(&MSDOS_I(mapping->host)->truncate_lock);
>>>>>> +        /* To get block number beyond file size in fallocated region */
>>>>>> +        atomic_set(&MSDOS_I(mapping->host)->beyond_isize, 1);
>>>>>>          blocknr = generic_block_bmap(mapping, block, fat_get_block);
>>>>>> +        atomic_set(&MSDOS_I(mapping->host)->beyond_isize, 0);
>>>>>>          up_read(&MSDOS_I(mapping->host)->truncate_lock);
>>>>>
>>>>> This is racy. While user is using bmap, kernel can allocate new
>>>>> blocks.
>>>>> We should use another function for this.
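One way to read "use another function" here is a bmap-only get_block callback: the beyond-i_size permission then travels with the call itself instead of living in shared inode state, so concurrent readers of the same inode never see it. A rough sketch of that shape (the callback name and the lookup helper are hypothetical, not the code that was eventually posted):

/*
 * [Sketch, not the posted patch] A bmap-only variant of fat_get_block():
 * generic_block_bmap() always calls it with create == 0, and it is allowed
 * to map a block beyond i_size (e.g. in a fallocated tail), so no shared
 * per-inode flag is needed and concurrent readers never see the exception.
 */
static int fat_get_block_for_bmap(struct inode *inode, sector_t iblock,
				  struct buffer_head *bh_result, int create)
{
	sector_t phys = 0;
	int err;

	WARN_ON(create != 0);

	/*
	 * fat_lookup_block_nocheck() is a hypothetical helper standing in
	 * for a FAT-chain walk that skips the "iblock must be below i_size"
	 * check; it only looks up, never allocates.
	 */
	err = fat_lookup_block_nocheck(inode, iblock, &phys);
	if (err)
		return err;
	if (phys)
		map_bh(bh_result, inode->i_sb, phys);
	return 0;
}

static sector_t _fat_bmap(struct address_space *mapping, sector_t block)
{
	sector_t blocknr;

	/* fat_get_cluster() assumes the requested blocknr isn't truncated. */
	down_read(&MSDOS_I(mapping->host)->truncate_lock);
	blocknr = generic_block_bmap(mapping, block, fat_get_block_for_bmap);
	up_read(&MSDOS_I(mapping->host)->truncate_lock);

	return blocknr;
}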
>>>> I understand that FAT can map fallocated blocks in the read case while
>>>> the user is using bmap, but I cannot find the case where new blocks are
>>>> allocated. If I am missing something, could you please elaborate?
>>>> Do you mean the case where a _bmap request returns the block number for
>>>> a block allocated in a parallel write path?
>>>
>>> ->beyond_isize is global to the inode. So the write(2) path on the same
>>> inode can also see the 1 set by bmap() while another process is in the
>>> middle of bmap().
>> The 'create' flag will be 1 in the write(2) path, and ->beyond_isize is
>> only checked when 'create' is 0. Is there any case where beyond_isize can
>> race in the write(2) path?
>
> Ah, so instead of a write, on a race it will assign a physical address to
> buffers beyond i_size for a simple read? In that case, it is still wrong.
Right. I will fix this case.
Thanks for the review!
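To make the race concrete: because the flag lives in the shared inode, a plain read(2) on the same inode can slip into the window that bmap opens (illustrative interleaving, assuming the !create check sketched above):

/*
 * Task A: ioctl(FIBMAP)                Task B: read(2) on the same inode
 * -----------------------------        -----------------------------------
 * _fat_bmap()
 *   atomic_set(->beyond_isize, 1)
 *                                      ->readpage()
 *                                        fat_get_block(..., create = 0)
 *                                          sees ->beyond_isize == 1
 *                                          maps a block beyond i_size into
 *                                          the page's buffers
 *   generic_block_bmap(...)
 *   atomic_set(->beyond_isize, 0)
 *
 * Task B has attached a physical address to buffers beyond i_size for an
 * ordinary read -- the "still wrong" case above.
 */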
> --
> OGAWA Hirofumi <hirof...@mail.parknet.co.jp>
>