On 02/27/2012 02:58 PM, Suleiman Souhlal wrote:
> Signed-off-by: Suleiman Souhlal
> ---
> Documentation/cgroups/memory.txt | 44 +++--
> 1 files changed, 41 insertions(+), 3 deletions(-)
>
> diff --git a/Documentation/cgroups/memory.txt b/Documentation/cgrou
27.02.2012 22:05, Stanislav Kinsbursky wrote:
v3:
1) Lookup for client is performed from the beginning of the list on each PipeFS
event handling operation.
Lockdep is sad otherwise, because inode mutex is taken on PipeFS dentry
creation, which can be called on mount notification, where this per-net client
lock is taken on clients list walk.
There are two tightly bound objects: the pipe data (created for kernel needs;
holds a reference to the dentry, which depends on PipeFS mount/umount) and the
PipeFS dentry/inode pair (created on mount for user-space needs). Each of them
may independently hold or lack a valid reference to the other.
This means that
Currently, the wait queue used for polling RPC pipe changes from user space is
a part of the RPC pipe. But the pipe data itself can be released on NFS umount
prior to the dentry-inode pair connected to it (in case this pair is held open
by some process).
This is not a problem for almost all pipe users, bec
v2:
1) Prior to calling PipeFS dentry routines (for both types of clients - SUNRPC
and NFS) get the client and drop the list lock instead of replacing per-net
locks by mutexes.
First two patches fix lockdep warnings and the next two fix dereferencing of
released pipe data on eventfd close.
On Mon, 2012-02-27 at 20:55 +0400, Stanislav Kinsbursky wrote:
> 27.02.2012 20:21, Myklebust, Trond пишет:
> > On Mon, 2012-02-27 at 19:50 +0400, Stanislav Kinsbursky wrote:
> >> Lockdep is sad otherwise, because inode mutex is taken on PipeFS dentry
> >> creation, which can be called on mount noti
> Gmmm.
> Please correct me if I'm wrong: you are proposing
> something like this:
>
> spin_lock(&sn->rpc_client_lock);
> again:
> list_for_each_entry(clnt, &sn->all_clients, cl_clients) {
> 	if (((event == RPC_PIPEFS_MOUNT) && clnt->cl_dentry) ||
>
27.02.2012 20:21, Myklebust, Trond wrote:
On Mon, 2012-02-27 at 19:50 +0400, Stanislav Kinsbursky wrote:
Lockdep is sad otherwise, because inode mutex is taken on PipeFS dentry
creation, which can be called on mount notification, where this per-net client
lock is taken on clients list walk.
Sig
On Mon, 2012-02-27 at 19:50 +0400, Stanislav Kinsbursky wrote:
> Lockdep is sad otherwise, because inode mutex is taken on PipeFS dentry
> creation, which can be called on mount notification, where this per-net client
> lock is taken on clients list walk.
>
> Signed-off-by: Stanislav Kinsbursky
>
27.02.2012 19:59, David Laight wrote:
> 	spin_lock(&nn->nfs_client_lock);
> -	list_for_each_entry(clp, &nn->nfs_client_list, cl_share_link) {
> +	list_for_each_entry_safe(clp, tmp, &nn->nfs_client_list,
> +				 cl_share_link) {
> 		if (clp->rpc_ops != &nfs_v4_clientops)
> 			continue;
> +
Lockdep is sad otherwise, because inode mutex is taken on PipeFS dentry
creation, which can be called on mount notification, where this per-net client
lock is taken on clients list walk.
Signed-off-by: Stanislav Kinsbursky
---
fs/nfs/client.c | 2 +-
fs/nfs/idmap.c  | 8 ++--
2 files
Lockdep is sad otherwise, because inode mutex is taken on PipeFS dentry
creation, which can be called on mount notification, where this per-net client
lock is taken on clients list walk.
Signed-off-by: Stanislav Kinsbursky
---
net/sunrpc/clnt.c | 10 +++---
1 files changed, 7 insertions(+
27.02.2012 19:00, Myklebust, Trond wrote:
On Mon, 2012-02-27 at 17:49 +0400, Stanislav Kinsbursky wrote:
Lockdep is sad otherwise, because inode mutex is taken on PipeFS dentry
creation, which can be called on mount notification, where this per-net client
lock is taken on clients list walk.
Not
On Mon, 2012-02-27 at 17:49 +0400, Stanislav Kinsbursky wrote:
> Lockdep is sad otherwise, because inode mutex is taken on PipeFS dentry
> creation, which can be called on mount notification, where this per-net client
> lock is taken on clients list walk.
>
> Note: I used simple mutex instead of rw semaphore because of
> nfs_put_client->atomic_dec_and_mutex_lock() call.
First two patches fix lockdep warnings and the next two fix dereferencing of
released pipe data on eventfd close.
The following series consists of:
---
Stanislav Kinsbursky (4):
SUNRPC: replace per-net client lock by rw mutex
NFS: replace per-net client lock by mutex
SUNRPC: check R
Lockdep is sad otherwise, because inode mutex is taken on PipeFS dentry
creation, which can be called on mount notification, where this per-net client
lock is taken on clients list walk.
Note: I used a simple mutex instead of an rw semaphore because of the
nfs_put_client->atomic_dec_and_mutex_lock() call.
Lockdep is sad otherwise, because inode mutex is taken on PipeFS dentry
creation, which can be called on mount notification, where this per-net client
lock is taken on clients list walk.
Signed-off-by: Stanislav Kinsbursky
---
net/sunrpc/clnt.c | 16
net/sunrpc/netns.h