On Thu, 11 Jul 2013 16:54:04 +0400
Pavel Shilovsky <[email protected]> wrote:

> If we request reading or writing on a file that needs to be
> reopened, it causes a deadlock: we are already holding the rw
> semaphore for reading and then try to acquire it for writing
> in cifs_relock_file. Fix this by acquiring the semaphore for
> reading in cifs_relock_file, since we don't make any changes
> to the locks and don't need write access.
> 
> Signed-off-by: Pavel Shilovsky <[email protected]>
> ---
>  fs/cifs/file.c |    9 ++++-----
>  1 file changed, 4 insertions(+), 5 deletions(-)
> 
> diff --git a/fs/cifs/file.c b/fs/cifs/file.c
> index b149ae4..be24f09 100644
> --- a/fs/cifs/file.c
> +++ b/fs/cifs/file.c
> @@ -561,11 +561,10 @@ cifs_relock_file(struct cifsFileInfo *cfile)
>       struct cifs_tcon *tcon = tlink_tcon(cfile->tlink);
>       int rc = 0;
>  
> -     /* we are going to update can_cache_brlcks here - need a write access */
> -     down_write(&cinode->lock_sem);
> +     down_read(&cinode->lock_sem);
>       if (cinode->can_cache_brlcks) {
> -             /* can cache locks - no need to push them */
> -             up_write(&cinode->lock_sem);
> +             /* can cache locks - no need to relock */
> +             up_read(&cinode->lock_sem);
>               return rc;
>       }
>  
> @@ -576,7 +575,7 @@ cifs_relock_file(struct cifsFileInfo *cfile)
>       else
>               rc = tcon->ses->server->ops->push_mand_locks(cfile);
>  
> -     up_write(&cinode->lock_sem);
> +     up_read(&cinode->lock_sem);
>       return rc;
>  }
>  

Ok, I think this will work, but I'm a little leery of recursively
taking the same semaphore for read. This locking really could use a
rethink, since getting it wrong is a prime source of subtle bugs.
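
To spell out the pattern I'm wary of, here's a rough sketch of the
call chain as I understand it (the function below is made up for
illustration and the reopen path is simplified -- it's not the
literal cifs code):

static int example_read_path(struct cifsFileInfo *cfile)
{
	struct cifsInodeInfo *cinode = CIFS_I(cfile->dentry->d_inode);
	int rc;

	/* read/write paths take lock_sem shared while checking
	 * byte-range locks */
	down_read(&cinode->lock_sem);

	/* ... the handle turns out to be invalid, so reopen ... */
	rc = cifs_reopen_file(cfile, true);
	/*
	 * cifs_reopen_file() ends up in cifs_relock_file(), which
	 * takes lock_sem again: down_write() before this patch
	 * (instant self-deadlock, since we already hold it shared
	 * above), a second down_read() after it.
	 */

	up_read(&cinode->lock_sem);
	return rc;
}

The double down_read() is what makes me nervous: if another task ever
queues a down_write() on lock_sem between the outer and inner read
acquisitions, the inner down_read() will block behind that writer,
which is blocked behind our outer read hold, and we're back to a
deadlock.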

Maybe you could consider using RCU instead?
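
Something along these lines, very roughly (brlock_state and its struct
are invented just to show the shape -- this is nowhere near a drop-in
replacement for lock_sem, which also protects the lock lists
themselves):

static int example_can_cache_brlcks(struct cifsInodeInfo *cinode)
{
	struct cifs_brlock_state *state;	/* invented struct */

	/* reader side: no sleeping lock needed at all */
	rcu_read_lock();
	state = rcu_dereference(cinode->brlock_state);
	if (state && state->can_cache_brlcks) {
		rcu_read_unlock();
		return 1;
	}
	rcu_read_unlock();
	return 0;
}

/*
 * The writer side would allocate a new state, publish it with
 * rcu_assign_pointer() and free the old one only after a grace
 * period (call_rcu()/kfree_rcu()), instead of taking lock_sem
 * for write.
 */

Whether that buys us much over the rwsem here, I'm not sure -- the
lock lists would still need their own protection.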

Acked-by: Jeff Layton <[email protected]>