Well... Maybe. Let's check how it works with our kernel.
07.11.2017 10:16, Konstantin Khorenko wrote:
> Going to send it to mainstream as well?
>
> --
> Best regards,
>
> Konstantin Khorenko,
> Virtuozzo Linux Kernel Team
>
> On 11/03/2017 07:47 PM, Stanislav Kinsburskiy wrote:
>> From: Stanislav Kinsburskiy <skinsbur...@parallels.com>
>>
>> The problem is that the per-net SUNRPC transports shutdown is done regardless
>> of any currently executing callback. This is a race leading to a transport
>> use-after-free in the callback handler.
>> This patch fixes it in a straightforward way, i.e. it protects callback
>> execution with the same mutex used for per-net data creation and destruction.
>> Hopefully, it won't slow down the NFS client significantly.
>>
>> https://jira.sw.ru/browse/PSBM-75751
>>
>> Signed-off-by: Stanislav Kinsburskiy <skinsbur...@parallels.com>
>> ---
>>  fs/nfs/callback.c | 3 +++
>>  1 file changed, 3 insertions(+)
>>
>> diff --git a/fs/nfs/callback.c b/fs/nfs/callback.c
>> index 0beb275..82e8ed1 100644
>> --- a/fs/nfs/callback.c
>> +++ b/fs/nfs/callback.c
>> @@ -118,6 +118,7 @@ nfs41_callback_svc(void *vrqstp)
>>  			continue;
>>  
>>  		prepare_to_wait(&serv->sv_cb_waitq, &wq, TASK_INTERRUPTIBLE);
>> +		mutex_lock(&nfs_callback_mutex);
>>  		spin_lock_bh(&serv->sv_cb_lock);
>>  		if (!list_empty(&serv->sv_cb_list)) {
>>  			req = list_first_entry(&serv->sv_cb_list,
>> @@ -129,8 +130,10 @@
>>  			error = bc_svc_process(serv, req, rqstp);
>>  			dprintk("bc_svc_process() returned w/ error code= %d\n",
>>  				error);
>> +			mutex_unlock(&nfs_callback_mutex);
>>  		} else {
>>  			spin_unlock_bh(&serv->sv_cb_lock);
>> +			mutex_unlock(&nfs_callback_mutex);
>>  			schedule();
>>  			finish_wait(&serv->sv_cb_waitq, &wq);
>>  		}
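For reference, below is a minimal userspace sketch of the pattern the patch relies on: the request-processing path and the per-net shutdown path serialize on the same mutex, so the transport cannot be freed while a callback request against it is still being handled. This is an illustration only; the names used here (net_mutex, struct transport, process_one_request(), shutdown_net()) are made up for the example and do not come from the kernel sources.

    /*
     * Illustrative sketch only (userspace, pthreads) -- not the kernel code.
     * The consumer and the teardown path take the same mutex, so the object
     * cannot be freed while a request against it is still being processed.
     */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct transport {                       /* stand-in for the per-net transport */
            int id;
    };

    static pthread_mutex_t net_mutex = PTHREAD_MUTEX_INITIALIZER;
    static struct transport *xprt;           /* created and destroyed per net */

    /* Callback-service side: mirrors taking nfs_callback_mutex before use. */
    static void process_one_request(void)
    {
            pthread_mutex_lock(&net_mutex);
            if (xprt)                        /* transport still alive? */
                    printf("processing request on transport %d\n", xprt->id);
            pthread_mutex_unlock(&net_mutex);
    }

    /* Per-net shutdown side: frees the transport under the same mutex. */
    static void shutdown_net(void)
    {
            pthread_mutex_lock(&net_mutex);
            free(xprt);
            xprt = NULL;
            pthread_mutex_unlock(&net_mutex);
    }

    int main(void)
    {
            xprt = calloc(1, sizeof(*xprt));
            xprt->id = 1;

            process_one_request();           /* dereference happens under net_mutex */
            shutdown_net();                  /* cannot race with the use above */
            process_one_request();           /* sees xprt == NULL and skips */
            return 0;
    }

The obvious trade-off, and what the "hopefully it won't slow down the NFS client" remark is about, is that with the patch bc_svc_process() now runs with nfs_callback_mutex held, so callback processing and per-net setup/teardown are fully serialized.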