On Thu, Aug 8, 2024 at 1:50 PM Jason Wang <jasow...@redhat.com> wrote:
>
> On Wed, Aug 7, 2024 at 2:54 PM Yongji Xie <xieyon...@bytedance.com> wrote:
> >
> > On Wed, Aug 7, 2024 at 12:38 PM Jason Wang <jasow...@redhat.com> wrote:
> > >
> > > On Wed, Aug 7, 2024 at 11:13 AM Yongji Xie <xieyon...@bytedance.com> 
> > > wrote:
> > > >
> > > > On Wed, Aug 7, 2024 at 10:39 AM Jason Wang <jasow...@redhat.com> wrote:
> > > > >
> > > > > On Tue, Aug 6, 2024 at 11:10 AM Yongji Xie <xieyon...@bytedance.com> 
> > > > > wrote:
> > > > > >
> > > > > > On Tue, Aug 6, 2024 at 10:28 AM Jason Wang <jasow...@redhat.com> 
> > > > > > wrote:
> > > > > > >
> > > > > > > On Mon, Aug 5, 2024 at 6:42 PM Yongji Xie 
> > > > > > > <xieyon...@bytedance.com> wrote:
> > > > > > > >
> > > > > > > > On Mon, Aug 5, 2024 at 4:24 PM Jason Wang <jasow...@redhat.com> 
> > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > On Mon, Aug 5, 2024 at 4:21 PM Jason Wang 
> > > > > > > > > <jasow...@redhat.com> wrote:
> > > > > > > > > >
> > > > > > > > > > Barry said [1]:
> > > > > > > > > >
> > > > > > > > > > """
> > > > > > > > > > mm doesn't support non-blockable __GFP_NOFAIL allocation. 
> > > > > > > > > > Because
> > > > > > > > > > __GFP_NOFAIL without direct reclamation may just result in 
> > > > > > > > > > a busy
> > > > > > > > > > loop within non-sleepable contexts.
> > > > > > > > > > ""“
> > > > > > > > > >
> > > > > > > > > > Unfortunately, we do that under a read lock. A possible way to
> > > > > > > > > > fix that is to move the page allocation out of the lock into
> > > > > > > > > > the caller, but having to allocate a huge number of pages and
> > > > > > > > > > an auxiliary page array seems to be problematic as well, per
> > > > > > > > > > Tetsuo [2]:
> > > > > > > > > >
> > > > > > > > > > """
> > > > > > > > > > You should implement proper error handling instead of using
> > > > > > > > > > __GFP_NOFAIL if count can become large.
> > > > > > > > > > """
> > > > > > > > > >
> > > > > > > >
> > > > > > > > I think the problem is that it's currently hard to do the error
> > > > > > > > handling in fops->release().
> > > > > > >
> > > > > > > vduse_dev_dereg_umem() should be the same: it's very hard to allow
> > > > > > > it to fail.
> > > > > > >
> > > > > > > >
> > > > > > > > So can we temporarily hold the user page refcount and release it
> > > > > > > > when vduse_dev_open()/vduse_domain_release() is executed? The
> > > > > > > > kernel page allocation and memcpy can then be done in
> > > > > > > > vduse_dev_open(), which allows some error handling.
> > > > > > >
> > > > > > > Just to make sure I understand this: the free is probably not the
> > > > > > > big issue, but the allocation itself is.
> > > > > > >
> > > > > >
> > > > > > Yes, so deferring the allocation might be a solution.
> > > > >
> > > > > Would you mind posting a patch for this?
> > > > >
> > > > > >
> > > > > > > And if we do the memcpy() in open(), it seems to be a subtle
> > > > > > > userspace-noticeable change? (Or I don't get how copying in
> > > > > > > vduse_dev_open() can help here.)
> > > > > > >
> > > > > >
> > > > > > Maybe we don't need to do the copy in open(). We can hold the user
> > > > > > page refcount until the inflight I/O is completed. That means the
> > > > > > allocation of new kernel pages can be done in
> > > > > > vduse_domain_map_bounce_page() and the release of old user pages can
> > > > > > be done in vduse_domain_unmap_bounce_page().
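> > > > > >
> > > > > > Roughly something like this (only a sketch of the idea; the
> > > > > > user_bounce_page field and the helper names below are made up for
> > > > > > illustration, not the real code):
> > > > > >
> > > > > > #include <linux/mm.h>
> > > > > > #include <linux/gfp.h>
> > > > > > #include <linux/highmem.h>
> > > > > >
> > > > > > struct vduse_bounce_map_sketch {
> > > > > > 	struct page *bounce_page;      /* page currently used for bouncing */
> > > > > > 	struct page *user_bounce_page; /* pinned user page, if any */
> > > > > > };
> > > > > >
> > > > > > /* DMA map path: lazily switch back to a kernel bounce page. */
> > > > > > static int map_bounce_page_sketch(struct vduse_bounce_map_sketch *map)
> > > > > > {
> > > > > > 	struct page *page;
> > > > > >
> > > > > > 	if (!map->user_bounce_page)
> > > > > > 		return 0;
> > > > > >
> > > > > > 	/*
> > > > > > 	 * May run in atomic context, so no sleeping and no
> > > > > > 	 * __GFP_NOFAIL; the failure is propagated to the caller.
> > > > > > 	 */
> > > > > > 	page = alloc_page(GFP_ATOMIC);
> > > > > > 	if (!page)
> > > > > > 		return -ENOMEM;
> > > > > >
> > > > > > 	/* Carry the current contents over from the pinned user page. */
> > > > > > 	copy_highpage(page, map->user_bounce_page);
> > > > > > 	map->bounce_page = page;
> > > > > > 	return 0;
> > > > > > }
> > > > > >
> > > > > > /* DMA unmap path: the old user page is no longer needed. */
> > > > > > static void unmap_bounce_page_sketch(struct vduse_bounce_map_sketch *map)
> > > > > > {
> > > > > > 	if (map->user_bounce_page) {
> > > > > > 		put_page(map->user_bounce_page);
> > > > > > 		map->user_bounce_page = NULL;
> > > > > > 	}
> > > > > > }
> > > > > >
> > > > > > Then vduse_dev_dereg_umem() itself never needs to allocate kernel
> > > > > > pages.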
> > > > >
> > > > > This seems to be a subtle userspace-noticeable behaviour change?
> > > > >
> > > >
> > > > Yes, userspace needs to ensure that it does not reuse the old user
> > > > pages for other purposes before vduse_dev_dereg_umem() returns
> > > > successfully. vduse_dev_dereg_umem() will only return successfully
> > > > when there is no inflight I/O, which means we don't need to allocate
> > > > extra kernel pages to store the data. If we can't accept this, then
> > > > your current patch might be the most suitable.
> > >
> > > It might be better not to break userspace.
> > >
> > > Actually, during my testing, the read_lock in the do_bounce path slows
> > > down performance. Removing the read_lock or using rcu_read_lock() gives
> > > a 20% improvement in PPS.
> > >
> >
> > Looks like rcu_read_lock() should be OK here.
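> >
> > Something like the following (only a sketch of the locking change; it
> > assumes the bounce page pointer itself can be published via RCU, and
> > the names are illustrative rather than the existing code):
> >
> > #include <linux/rcupdate.h>
> > #include <linux/mm.h>
> > #include <linux/highmem.h>
> >
> > struct bounce_map_sketch {
> > 	struct page __rcu *bounce_page;
> > };
> >
> > /* Hot (bounce) path: RCU read-side section instead of read_lock(). */
> > static void bounce_to_buf_sketch(struct bounce_map_sketch *map,
> > 				 void *buf, size_t len)
> > {
> > 	struct page *page;
> >
> > 	rcu_read_lock();
> > 	page = rcu_dereference(map->bounce_page);
> > 	memcpy_from_page(buf, page, 0, len);
> > 	rcu_read_unlock();
> > }
> >
> > /* Update path, e.g. switching to/from a user bounce page. */
> > static void bounce_page_replace_sketch(struct bounce_map_sketch *map,
> > 				       struct page *new_page)
> > {
> > 	struct page *old = rcu_dereference_protected(map->bounce_page, true);
> >
> > 	/* Carry the current contents over before publishing the new page. */
> > 	copy_highpage(new_page, old);
> > 	rcu_assign_pointer(map->bounce_page, new_page);
> > 	synchronize_rcu();	/* no reader can still be using 'old' */
> > 	/* 'old' can now be freed or unpinned */
> > }
> >
> > The synchronize_rcu() on the (rare) update path keeps the old page
> > alive until in-flight bounces have finished, so only the hot path
> > loses the lock.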
>
> The tricky part is that we may still end up with behaviour changes (or
> lose some of the synchronization between the kernel and user bounce
> pages):
>
> RCU allows the reader to run in parallel with the writer. So the
> bouncing could be done in parallel with
> vduse_domain_add_user_bounce_pages(), and there would be a race between
> the two memcpy()s (e.g. data bounced into the old page while its
> contents are being copied over could be lost).
>

Hmm... this is a problem. We may still need some userspace-noticeable
behaviour change, e.g. only allowing reg_umem/dereg_umem when the device
is not started.
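
Something like this in the reg_umem/dereg_umem path (only a sketch; it
assumes the device status byte is tracked in the vduse device struct,
and the names here are illustrative):

#include <linux/virtio_config.h>
#include <linux/errno.h>
#include <linux/types.h>

struct vduse_dev_sketch {
	u8 status;	/* virtio status byte as last set by the driver */
};

/* Called at the start of the reg_umem/dereg_umem ioctl handlers. */
static int umem_op_allowed_sketch(struct vduse_dev_sketch *dev)
{
	/*
	 * Refuse to (de)register userspace bounce memory once the device
	 * has been started, so it can never race with the bouncing.
	 */
	if (dev->status & VIRTIO_CONFIG_S_DRIVER_OK)
		return -EBUSY;
	return 0;
}

Userspace would then see -EBUSY from reg_umem/dereg_umem after DRIVER_OK,
which is a visible change, but at least an explicit one.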

Thanks,
Yongji
