On 2021/4/10 1:17, Tim Chen wrote:
> 
> 
> On 4/9/21 1:42 AM, Miaohe Lin wrote:
>> On 2021/4/9 5:34, Tim Chen wrote:
>>>
>>>
>>> On 4/8/21 6:08 AM, Miaohe Lin wrote:
>>>> When I was investigating the swap code, I found the below possible race
>>>> window:
>>>>
>>>> CPU 1                                      CPU 2
>>>> -----                                      -----
>>>> do_swap_page
>>>>   synchronous swap_readpage
>>>>     alloc_page_vma
>>>>                                    swapoff
>>>>                                      release swap_file, bdev, or ...
>>>
>>
>> Many thanks for quick review and reply!
>>
>>> Perhaps I'm missing something.  The release of swap_file, bdev etc.
>>> happens after we have cleared the SWP_VALID bit in si->flags in
>>> destroy_swap_extents, if I read the swapoff code correctly.
>> Agree. Let's look at this more closely:
>> CPU1                                                         CPU2
>> -----                                                        -----
>> swap_readpage
>>   if (data_race(sis->flags & SWP_FS_OPS)) {
>>                                                              swapoff
>>                                                                p->swap_file = NULL;
>>     struct file *swap_file = sis->swap_file;
>>     struct address_space *mapping = swap_file->f_mapping; [oops!]
>>                                                              ...
>>                                                              p->flags = 0;
>>     ...
>>
>> Does this make sense to you?
> 
> p->swap_file = NULL happens after the
> p->flags &= ~SWP_VALID, synchronize_rcu(), destroy_swap_extents() sequence in
> swapoff().
> 
> So I don't think the sequence you illustrated on CPU2 is in the right order.
> That said, without get_swap_device/put_swap_device in swap_readpage, you could
> potentially blow past synchronize_rcu() on CPU2 and cause a problem.  So I think
> the problematic race looks something like the following:
> 
> 
> CPU1                                                          CPU2
> -----                                                         -----
> swap_readpage
>   if (data_race(sis->flags & SWP_FS_OPS)) {
>                                                               swapoff
>                                                                 p->flags &= ~SWP_VALID;
>                                                                 ..
>                                                                 synchronize_rcu();
>                                                                 ..
>                                                                 p->swap_file = NULL;
>     struct file *swap_file = sis->swap_file;
>     struct address_space *mapping = swap_file->f_mapping; [oops!]
>                                                                 ...
>     ...
> 

Agree. This is also what I meant to illustrate, and you have provided a better one.
Many thanks!
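
For anyone following along, a condensed sketch of the rcu-based guard we are
discussing (paraphrased from my reading of mm/swapfile.c in this era; the
offset range check and error reporting are trimmed):

/*
 * swapoff() clears SWP_VALID and then calls synchronize_rcu() before it
 * releases swap_file/bdev, so a reader that saw SWP_VALID inside the rcu
 * read section stays safe until it calls put_swap_device().
 */
struct swap_info_struct *get_swap_device(swp_entry_t entry)
{
	struct swap_info_struct *si = swp_swap_info(entry);

	if (!si)
		return NULL;

	rcu_read_lock();
	if (data_race(!(si->flags & SWP_VALID))) {
		rcu_read_unlock();
		return NULL;	/* swapoff has already invalidated the device */
	}
	return si;
}

static inline void put_swap_device(struct swap_info_struct *si)
{
	rcu_read_unlock();
}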

> By adding get_swap_device/put_swap_device, the race is fixed.
> 
> 
> CPU1                                                          CPU2
> -----                                                         -----
> swap_readpage
>   get_swap_device()
>   ..
>   if (data_race(sis->flags & SWP_FS_OPS)) {
>                                                               swapoff
>                                                                 p->flags &= ~SWP_VALID;
>                                                                 ..
>     struct file *swap_file = sis->swap_file;
>     struct address_space *mapping = swap_file->f_mapping; [valid value]
>   ..
>   put_swap_device()
>                                                                 synchronize_rcu();
>                                                                 ..
>                                                                 p->swap_file = NULL;
> 
> 
>>
>>>>
>>>>       swap_readpage
>>>>         check sis->flags is ok
>>>>           access swap_file, bdev...[oops!]
>>>>                                        si->flags = 0
>>>
>>> This happens after we clear the si->flags
>>>                                     synchronize_rcu()
>>>                                     release swap_file, bdev, in destroy_swap_extents()
>>>
>>> So I think if we have get_swap_device/put_swap_device in do_swap_page,
>>> it should fix the race you've pointed out here.  
>>> Then synchronize_rcu() will wait till we have completed do_swap_page and
>>> called put_swap_device.
>>
>> Right, get_swap_device/put_swap_device could fix this race. __But__
>> rcu_read_lock() in get_swap_device() disables preemption, and do_swap_page()
>> may take a really long time because it involves I/O. It may not be acceptable
>> to disable preemption for such a long time. :(
> 
> I can see that it is not a good idea to hold the rcu read lock for a long
> time over a slow file I/O operation, which will be the side effect of
> introducing get/put_swap_device to swap_readpage.  So using percpu_ref
> will then be preferable for synchronization once we introduce
> get/put_swap_device into swap_readpage.
> 
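
A rough sketch of what a percpu_ref based get/put_swap_device could look like
(the field names si->users and si->comp, the helper name on the swapoff side,
and the exact ordering are my assumptions rather than a final patch):

static void swap_users_ref_free(struct percpu_ref *ref)
{
	struct swap_info_struct *si;

	si = container_of(ref, struct swap_info_struct, users);
	complete(&si->comp);	/* let swapoff() continue once all users are gone */
}

struct swap_info_struct *get_swap_device(swp_entry_t entry)
{
	struct swap_info_struct *si = swp_swap_info(entry);

	/* tryget_live fails once swapoff has killed the ref, and the
	 * reference can be held across sleeping I/O, unlike an rcu
	 * read section. */
	if (!si || !percpu_ref_tryget_live(&si->users))
		return NULL;
	return si;
}

void put_swap_device(struct swap_info_struct *si)
{
	percpu_ref_put(&si->users);
}

/* swapoff() side: stop new users, then wait for in-flight readers to drain
 * before releasing swap_file, bdev, ... */
static void wait_for_swap_users(struct swap_info_struct *si)
{
	percpu_ref_kill(&si->users);
	wait_for_completion(&si->comp);
}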

The sis->bdev should also be protected by get/put_swap_device, as it has a
similar issue. And swap_slot_free_notify (called from the end_swap_bio_read
callback) would race with swapoff too. So I use get/put_swap_device to protect
swap_readpage until the file I/O operation is completed, roughly as sketched
below.
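
A simplified sketch of the shape I have in mind for swap_readpage() (the
signature is from my reading of the current code, the body is condensed, and
the error handling is illustrative only):

int swap_readpage(struct page *page, bool synchronous)
{
	swp_entry_t entry = { .val = page_private(page) };
	struct swap_info_struct *sis;
	int ret = 0;

	/* Pin the swap device so sis->swap_file and sis->bdev cannot be
	 * released by a concurrent swapoff while the read is in flight. */
	sis = get_swap_device(entry);
	if (!sis)
		return -ENODEV;		/* raced with swapoff */

	/* ... existing SWP_FS_OPS / SWP_SYNCHRONOUS_IO / bdev read paths ... */

	put_swap_device(sis);
	return ret;
}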

Thanks again!

> Tim
> 
