[Ocfs2-devel] [PATCH v3] ocfs2/dlm: Optimization of code while freeing dead node locks.

2017-01-17 Thread Guozhonghua

Three nearly identical loops can be collapsed into a single outer loop with a
nested loop over the lock queues, so that less code does the same work.
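
For context (not part of the patch): the collapse relies on the existing
queue-index helper in fs/ocfs2/dlm/dlmcommon.h, which maps the three
consecutive enum values onto the per-lockres queues. From memory it looks
roughly like this:

	static inline struct list_head *
	dlm_list_idx_to_ptr(struct dlm_lock_resource *res, enum dlm_lockres_list idx)
	{
		struct list_head *ret = NULL;

		if (idx == DLM_GRANTED_LIST)
			ret = &res->granted;
		else if (idx == DLM_CONVERTING_LIST)
			ret = &res->converting;
		else if (idx == DLM_BLOCKED_LIST)
			ret = &res->blocked;
		/* any other index is a programming error */
		BUG_ON(!ret);
		return ret;
	}

Since DLM_GRANTED_LIST, DLM_CONVERTING_LIST and DLM_BLOCKED_LIST are
consecutive, iterating i from DLM_GRANTED_LIST to DLM_BLOCKED_LIST visits
all three queues.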

From 8a1e682503f4e5a5299fe8316cbf559f9b9701f1 Mon Sep 17 00:00:00 2001
From: Guozhonghua 
Date: Fri, 13 Jan 2017 11:27:32 +0800
Subject: [PATCH] Optimization of code while freeing dead node locks, revised per review.


Signed-off-by: Guozhonghua 
---
 fs/ocfs2/dlm/dlmrecovery.c |   39 ++++++++++++++-------------------------
 1 file changed, 14 insertions(+), 25 deletions(-)

diff --git a/fs/ocfs2/dlm/dlmrecovery.c b/fs/ocfs2/dlm/dlmrecovery.c
index dd5cb8b..93b71dd 100644
--- a/fs/ocfs2/dlm/dlmrecovery.c
+++ b/fs/ocfs2/dlm/dlmrecovery.c
@@ -2268,6 +2268,8 @@ static void dlm_free_dead_locks(struct dlm_ctxt *dlm,
 {
struct dlm_lock *lock, *next;
unsigned int freed = 0;
+   struct list_head *queue = NULL;
+   int i;

/* this node is the lockres master:
 * 1) remove any stale locks for the dead node
@@ -2280,31 +2282,18 @@ static void dlm_free_dead_locks(struct dlm_ctxt *dlm,
 * to force the DLM_UNLOCK_FREE_LOCK action so as to free the locks */

/* TODO: check pending_asts, pending_basts here */
-	list_for_each_entry_safe(lock, next, &res->granted, list) {
-		if (lock->ml.node == dead_node) {
-			list_del_init(&lock->list);
-			dlm_lock_put(lock);
-			/* Can't schedule DLM_UNLOCK_FREE_LOCK - do manually */
-			dlm_lock_put(lock);
-			freed++;
-		}
-	}
-	list_for_each_entry_safe(lock, next, &res->converting, list) {
-		if (lock->ml.node == dead_node) {
-			list_del_init(&lock->list);
-			dlm_lock_put(lock);
-			/* Can't schedule DLM_UNLOCK_FREE_LOCK - do manually */
-			dlm_lock_put(lock);
-			freed++;
-		}
-	}
-	list_for_each_entry_safe(lock, next, &res->blocked, list) {
-		if (lock->ml.node == dead_node) {
-			list_del_init(&lock->list);
-			dlm_lock_put(lock);
-			/* Can't schedule DLM_UNLOCK_FREE_LOCK - do manually */
-			dlm_lock_put(lock);
-			freed++;
+	for (i = DLM_GRANTED_LIST; i <= DLM_BLOCKED_LIST; i++) {
+		queue = dlm_list_idx_to_ptr(res, i);
+		list_for_each_entry_safe(lock, next, queue, list) {
+			if (lock->ml.node == dead_node) {
+				list_del_init(&lock->list);
+				dlm_lock_put(lock);
+				/* Can't schedule DLM_UNLOCK_FREE_LOCK
+				 * do manually
+				 */
+				dlm_lock_put(lock);
+				freed++;
+			}
 		}
 	}

--
1.7.9.5

[Ocfs2-devel] [PATCH v4 2/2] ocfs2: fix deadlock issue when taking inode lock at vfs entry points

2017-01-17 Thread Eric Ren
Commit 743b5f1434f5 ("ocfs2: take inode lock in ocfs2_iop_set/get_acl()")
results in a deadlock, as the author "Tariq Saeed" realized shortly
after the patch was merged. The discussion happened here
(https://oss.oracle.com/pipermail/ocfs2-devel/2015-September/011085.html).

The reason why taking the cluster inode lock at VFS entry points opens up
a self-deadlock window is explained in the previous patch of this series.

So far, we have seen two different code paths that have this issue.
1. do_sys_open
    may_open
     inode_permission
      ocfs2_permission
       ocfs2_inode_lock() <=== take PR
       generic_permission
        get_acl
         ocfs2_iop_get_acl
          ocfs2_inode_lock() <=== take PR
2. fchmod|fchmodat
    chmod_common
     notify_change
      ocfs2_setattr <=== take EX
       posix_acl_chmod
        get_acl
         ocfs2_iop_get_acl <=== take PR
        ocfs2_iop_set_acl <=== take EX

Fix them by applying the tracking logic (introduced in the previous patch)
to the functions above: ocfs2_permission(), ocfs2_iop_[set|get]_acl() and
ocfs2_setattr().
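
For readability of the hunks below, here is my understanding of the wrapper
introduced in patch 1/2: a negative return value is a real locking error, 0
means this call actually took the cluster lock, and a positive value means
the current process already held it, so only the holder bookkeeping (and bh
lookup) is done. Every converted call site then follows the same shape; the
function below is purely illustrative, not part of the patch:

	static int example_vfs_entry(struct inode *inode, int ex)
	{
		struct buffer_head *bh = NULL;
		struct ocfs2_lock_holder oh;
		int had_lock, status = 0;

		had_lock = ocfs2_inode_lock_tracker(inode, &bh, ex, &oh);
		if (had_lock < 0)	/* genuine cluster-lock failure */
			return had_lock;

		/* ... do the work that needs the cluster lock ... */

		/* presumably only really unlocks when this call took the lock */
		ocfs2_inode_unlock_tracker(inode, ex, &oh, had_lock);
		brelse(bh);
		return status;
	}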

Changes since v1:
- Let ocfs2_is_locked_by_me() just return true/false to indicate if the
process gets the cluster lock - suggested by: Joseph Qi 
and Junxiao Bi .

- Change "struct ocfs2_holder" to a more meaningful name "ocfs2_lock_holder",
suggested by: Junxiao Bi.

- Add debugging output at ocfs2_setattr() and ocfs2_permission() to
catch exceptional cases, suggested by: Junxiao Bi.

Changes since v2:
- Use new wrappers of tracking logic code, suggested by: Junxiao Bi.

Signed-off-by: Eric Ren 
Reviewed-by: Junxiao Bi 
Reviewed-by: Joseph Qi 
---
 fs/ocfs2/acl.c  | 29 +
 fs/ocfs2/file.c | 58 -
 2 files changed, 58 insertions(+), 29 deletions(-)

diff --git a/fs/ocfs2/acl.c b/fs/ocfs2/acl.c
index bed1fcb..dc22ba8 100644
--- a/fs/ocfs2/acl.c
+++ b/fs/ocfs2/acl.c
@@ -283,16 +283,14 @@ int ocfs2_set_acl(handle_t *handle,
 int ocfs2_iop_set_acl(struct inode *inode, struct posix_acl *acl, int type)
 {
struct buffer_head *bh = NULL;
-   int status = 0;
+   int status, had_lock;
+   struct ocfs2_lock_holder oh;
 
-   status = ocfs2_inode_lock(inode, &bh, 1);
-   if (status < 0) {
-   if (status != -ENOENT)
-   mlog_errno(status);
-   return status;
-   }
+   had_lock = ocfs2_inode_lock_tracker(inode, &bh, 1, &oh);
+   if (had_lock < 0)
+   return had_lock;
status = ocfs2_set_acl(NULL, inode, bh, type, acl, NULL, NULL);
-   ocfs2_inode_unlock(inode, 1);
+   ocfs2_inode_unlock_tracker(inode, 1, &oh, had_lock);
brelse(bh);
return status;
 }
@@ -302,21 +300,20 @@ struct posix_acl *ocfs2_iop_get_acl(struct inode *inode, int type)
struct ocfs2_super *osb;
struct buffer_head *di_bh = NULL;
struct posix_acl *acl;
-   int ret;
+   int had_lock;
+   struct ocfs2_lock_holder oh;
 
osb = OCFS2_SB(inode->i_sb);
if (!(osb->s_mount_opt & OCFS2_MOUNT_POSIX_ACL))
return NULL;
-   ret = ocfs2_inode_lock(inode, &di_bh, 0);
-   if (ret < 0) {
-   if (ret != -ENOENT)
-   mlog_errno(ret);
-   return ERR_PTR(ret);
-   }
+
+   had_lock = ocfs2_inode_lock_tracker(inode, &di_bh, 0, &oh);
+   if (had_lock < 0)
+   return ERR_PTR(had_lock);
 
acl = ocfs2_get_acl_nolock(inode, type, di_bh);
 
-   ocfs2_inode_unlock(inode, 0);
+   ocfs2_inode_unlock_tracker(inode, 0, &oh, had_lock);
brelse(di_bh);
return acl;
 }
diff --git a/fs/ocfs2/file.c b/fs/ocfs2/file.c
index c488965..7b6a146 100644
--- a/fs/ocfs2/file.c
+++ b/fs/ocfs2/file.c
@@ -1138,6 +1138,8 @@ int ocfs2_setattr(struct dentry *dentry, struct iattr *attr)
handle_t *handle = NULL;
struct dquot *transfer_to[MAXQUOTAS] = { };
int qtype;
+   int had_lock;
+   struct ocfs2_lock_holder oh;
 
trace_ocfs2_setattr(inode, dentry,
(unsigned long long)OCFS2_I(inode)->ip_blkno,
@@ -1173,11 +1175,30 @@ int ocfs2_setattr(struct dentry *dentry, struct iattr *attr)
}
}
 
-   status = ocfs2_inode_lock(inode, &bh, 1);
-   if (status < 0) {
-   if (status != -ENOENT)
-   mlog_errno(status);
+   had_lock = ocfs2_inode_lock_tracker(inode, &bh, 1, &oh);
+   if (had_lock < 0) {
+   status = had_lock;
goto bail_unlock_rw;
+   } else if (had_lock) {
+   /*
+* As far as we know, ocfs2_setattr() could only be the first
+* VFS entry point in the call chain of recursive cluster
+* locking issue.
+*
+* For instance:
+* 

[Ocfs2-devel] [PATCH v4 0/2] fix deadlock caused by recursive cluster locking

2017-01-17 Thread Eric Ren
Hi Andrew,

This version of the patch set has been reviewed by Joseph Qi and Junxiao Bi.
I think it's good to be queued up now.

Thanks for all of you!
Eric

This is a formal patch set v2 to solve the deadlock issue on which I
previously started an RFC (draft patch); the discussion happened here:
[https://oss.oracle.com/pipermail/ocfs2-devel/2016-October/012455.html]

Compared to the previous draft patch, this one is much simpler and neater.
It neither messes up the dlmglue core nor imposes a performance penalty on
the whole cluster locking system; instead, it is only used in the places
where such recursive cluster locking may happen.
 
Changes since v1: 
- Let ocfs2_is_locked_by_me() just return true/false to indicate if the
process gets the cluster lock - suggested by: Joseph Qi 
and Junxiao Bi .
 
- Change "struct ocfs2_holder" to a more meaningful name "ocfs2_lock_holder",
suggested by: Junxiao Bi. 
 
- Add debugging output at ocfs2_setattr() and ocfs2_permission() to
catch exceptional cases, suggested by: Junxiao Bi. 
 
- Do not inline functions whose bodies are not in scope, changed by:
Stephen Rothwell .

Changes since v2: 
- Use new wrappers of tracking logic code, suggested by: Junxiao Bi.

Changes since v3:
- Fix a redundant space, spotted by: Joseph Qi.
 
Your comments and feedback are always welcome.

Eric Ren (2):
  ocfs2/dlmglue: prepare tracking logic to avoid recursive cluster lock
  ocfs2: fix deadlock issue when taking inode lock at vfs entry points

 fs/ocfs2/acl.c |  29 +++
 fs/ocfs2/dlmglue.c | 105 +++--
 fs/ocfs2/dlmglue.h |  18 +
 fs/ocfs2/file.c|  58 ++---
 fs/ocfs2/ocfs2.h   |   1 +
 5 files changed, 179 insertions(+), 32 deletions(-)

-- 
2.10.2




[Ocfs2-devel] [PATCH v4 1/2] ocfs2/dlmglue: prepare tracking logic to avoid recursive cluster lock

2017-01-17 Thread Eric Ren
We are in the situation that we have to avoid recursive cluster locking,
but there is no way to check if a cluster lock has been taken by a
process already.

Mostly, we can avoid recursive locking by writing code carefully.
However, we found that it's very hard to handle the routines that
are invoked directly by vfs code. For instance:

const struct inode_operations ocfs2_file_iops = {
.permission = ocfs2_permission,
.get_acl= ocfs2_iop_get_acl,
.set_acl= ocfs2_iop_set_acl,
};

Both ocfs2_permission() and ocfs2_iop_get_acl() call ocfs2_inode_lock(PR):
do_sys_open
 may_open
  inode_permission
   ocfs2_permission
    ocfs2_inode_lock() <=== first time
    generic_permission
     get_acl
      ocfs2_iop_get_acl
       ocfs2_inode_lock() <=== recursive one

A deadlock will occur if a remote EX request comes in between the two
calls to ocfs2_inode_lock(). Briefly, the deadlock forms as follows:

On one hand, the OCFS2_LOCK_BLOCKED flag of this lockres is set in the
BAST (ocfs2_generic_handle_bast) when a downconvert is started
on behalf of the remote EX lock request. On the other hand, the recursive
cluster lock (the second one) will be blocked in __ocfs2_cluster_lock()
because of OCFS2_LOCK_BLOCKED. But the downconvert never completes,
because there is no chance for the first cluster lock on this node to be
unlocked - we block ourselves in the code path.

The idea to fix this issue is mostly taken from the gfs2 code.
1. introduce a new field, struct ocfs2_lock_res.l_holders, to
keep track of the pids of the processes that have taken the cluster lock
on this lock resource;
2. introduce a new flag for ocfs2_inode_lock_full(), OCFS2_META_LOCK_GETBH;
it means just get back the on-disk inode bh for us if we've already got the
cluster lock;
3. export a helper: ocfs2_is_locked_by_me() is used to check if we
have already got the cluster lock in the upper code path (a sketch of the
check follows below).
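
A rough sketch of the check in point 3, as I read it (the actual
implementation is in the dlmglue.c diff below): walk lockres->l_holders and
compare each recorded pid with the current task's pid.

	static int ocfs2_is_locked_by_me(struct ocfs2_lock_res *lockres)
	{
		struct ocfs2_lock_holder *oh;
		struct pid *pid = task_pid(current);
		int locked = 0;

		spin_lock(&lockres->l_lock);
		list_for_each_entry(oh, &lockres->l_holders, oh_list) {
			if (oh->oh_owner_pid == pid) {
				locked = 1;
				break;
			}
		}
		spin_unlock(&lockres->l_lock);

		return locked;
	}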

The tracking logic should be used by some of the ocfs2 VFS callbacks,
to solve the recursive locking issue caused by the fact that VFS routines
can call into each other.

The performance penalty of processing the holder list should only be seen
in a few cases where the tracking logic is used, such as get/set acl.

You may ask: what if the first time we got a PR lock, and the second time
we want an EX lock? Fortunately, this case never happens in the real world,
as far as I can see, including permission checks and (get|set)_(acl|attr);
the gfs2 code makes the same assumption.
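
To make the intended usage concrete, here is a hedged sketch of how a VFS
callback is expected to use the wrappers (the body is illustrative only; see
patch 2/2 for the real call sites). On the second, recursive entry the
tracker finds the current pid in l_holders, returns a positive had_lock and
skips taking the cluster lock again, which is what breaks the deadlock:

	int ocfs2_permission(struct inode *inode, int mask)
	{
		int ret, had_lock;
		struct ocfs2_lock_holder oh;

		/* 0 == PR; NULL == we don't need the inode bh here */
		had_lock = ocfs2_inode_lock_tracker(inode, NULL, 0, &oh);
		if (had_lock < 0)
			return had_lock;

		ret = generic_permission(inode, mask);

		/* drops the lock only if this call actually took it */
		ocfs2_inode_unlock_tracker(inode, 0, &oh, had_lock);
		return ret;
	}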

Changes since v1:
- Let ocfs2_is_locked_by_me() just return true/false to indicate if the
process gets the cluster lock - suggested by: Joseph Qi 
and Junxiao Bi .

- Change "struct ocfs2_holder" to a more meaningful name "ocfs2_lock_holder",
suggested by: Junxiao Bi.

- Do not inline functions whose bodies are not in scope, changed by:
Stephen Rothwell .

Changes since v2:
- Wrap the tracking logic code of recursive locking into functions,
ocfs2_inode_lock_tracker() and ocfs2_inode_unlock_tracker(),
suggested by: Junxiao Bi.

Changes since v3:
- Fix a redundant space, spotted by: Joseph Qi.

[s...@canb.auug.org.au remove some inlines]
Signed-off-by: Eric Ren 
Reviewed-by: Junxiao Bi 
Reviewed-by: Joseph Qi 
---
 fs/ocfs2/dlmglue.c | 105 +++--
 fs/ocfs2/dlmglue.h |  18 +
 fs/ocfs2/ocfs2.h   |   1 +
 3 files changed, 121 insertions(+), 3 deletions(-)

diff --git a/fs/ocfs2/dlmglue.c b/fs/ocfs2/dlmglue.c
index 77d1632..8dce409 100644
--- a/fs/ocfs2/dlmglue.c
+++ b/fs/ocfs2/dlmglue.c
@@ -532,6 +532,7 @@ void ocfs2_lock_res_init_once(struct ocfs2_lock_res *res)
   init_waitqueue_head(&res->l_event);
   INIT_LIST_HEAD(&res->l_blocked_list);
   INIT_LIST_HEAD(&res->l_mask_waiters);
+   INIT_LIST_HEAD(&res->l_holders);
 }
 
 void ocfs2_inode_lock_res_init(struct ocfs2_lock_res *res,
@@ -749,6 +750,50 @@ void ocfs2_lock_res_free(struct ocfs2_lock_res *res)
res->l_flags = 0UL;
 }
 
+/*
+ * Keep a list of processes who have interest in a lockres.
+ * Note: this is now only used for checking recursive cluster locking.
+ */
+static inline void ocfs2_add_holder(struct ocfs2_lock_res *lockres,
+  struct ocfs2_lock_holder *oh)
+{
+   INIT_LIST_HEAD(&oh->oh_list);
+   oh->oh_owner_pid = get_pid(task_pid(current));
+
+   spin_lock(&lockres->l_lock);
+   list_add_tail(&oh->oh_list, &lockres->l_holders);
+   spin_unlock(&lockres->l_lock);
+}
+
+static inline void ocfs2_remove_holder(struct ocfs2_lock_res *lockres,
+  struct ocfs2_lock_holder *oh)
+{
+   spin_lock(&lockres->l_lock);
+   list_del(&oh->oh_list);
+   spin_unlock(&lockres->l_lock);
+
+   put_pid(oh->oh_owner_pid);
+}
+
+static inline int ocfs2_is_locked_by_me(struct ocfs2_lock_res *lockres)
+{
+   struct ocfs2_lock_holder *oh;
+   struct pid *pid;
+
+   /* look in the list of holders for one with the current task as owner */
+ 

Re: [Ocfs2-devel] [PATCH v3 1/2] ocfs2/dlmglue: prepare tracking logic to avoid recursive cluster lock

2017-01-17 Thread Eric Ren
Hi!

On 01/17/2017 04:43 PM, Joseph Qi wrote:
> On 17/1/17 15:55, Eric Ren wrote:
>> Hi!
>>
>> On 01/17/2017 03:39 PM, Joseph Qi wrote:
>>>
>>> On 17/1/17 14:30, Eric Ren wrote:
 We are in the situation that we have to avoid recursive cluster locking,
 but there is no way to check if a cluster lock has been taken by a
 precess already.

 Mostly, we can avoid recursive locking by writing code carefully.
 However, we found that it's very hard to handle the routines that
 are invoked directly by vfs code. For instance:

 const struct inode_operations ocfs2_file_iops = {
  .permission = ocfs2_permission,
  .get_acl= ocfs2_iop_get_acl,
  .set_acl= ocfs2_iop_set_acl,
 };

 Both ocfs2_permission() and ocfs2_iop_get_acl() call ocfs2_inode_lock(PR):
 do_sys_open
   may_open
inode_permission
 ocfs2_permission
  ocfs2_inode_lock() <=== first time
   generic_permission
get_acl
 ocfs2_iop_get_acl
 ocfs2_inode_lock() <=== recursive one

 A deadlock will occur if a remote EX request comes in between two
 of ocfs2_inode_lock(). Briefly describe how the deadlock is formed:

 On one hand, OCFS2_LOCK_BLOCKED flag of this lockres is set in
 BAST(ocfs2_generic_handle_bast) when downconvert is started
 on behalf of the remote EX lock request. Another hand, the recursive
 cluster lock (the second one) will be blocked in in __ocfs2_cluster_lock()
 because of OCFS2_LOCK_BLOCKED. But, the downconvert never complete, why?
 because there is no chance for the first cluster lock on this node to be
 unlocked - we block ourselves in the code path.

 The idea to fix this issue is mostly taken from gfs2 code.
 1. introduce a new field: struct ocfs2_lock_res.l_holders, to
 keep track of the processes' pid  who has taken the cluster lock
 of this lock resource;
 2. introduce a new flag for ocfs2_inode_lock_full: OCFS2_META_LOCK_GETBH;
 it means just getting back disk inode bh for us if we've got cluster lock.
 3. export a helper: ocfs2_is_locked_by_me() is used to check if we
 have got the cluster lock in the upper code path.

 The tracking logic should be used by some of the ocfs2 vfs's callbacks,
 to solve the recursive locking issue cuased by the fact that vfs routines
 can call into each other.

 The performance penalty of processing the holder list should only be seen
 at a few cases where the tracking logic is used, such as get/set acl.

 You may ask what if the first time we got a PR lock, and the second time
 we want a EX lock? fortunately, this case never happens in the real world,
 as far as I can see, including permission check, (get|set)_(acl|attr), and
 the gfs2 code also do so.

 Changes since v1:
 - Let ocfs2_is_locked_by_me() just return true/false to indicate if the
 process gets the cluster lock - suggested by: Joseph Qi 
 
 and Junxiao Bi .

 - Change "struct ocfs2_holder" to a more meaningful name 
 "ocfs2_lock_holder",
 suggested by: Junxiao Bi.

 - Do not inline functions whose bodies are not in scope, changed by:
 Stephen Rothwell .

 Changes since v2:
 - Wrap the tracking logic code of recursive locking into functions,
 ocfs2_inode_lock_tracker() and ocfs2_inode_unlock_tracker(),
 suggested by: Junxiao Bi.

 [s...@canb.auug.org.au remove some inlines]
 Signed-off-by: Eric Ren 
 ---
   fs/ocfs2/dlmglue.c | 105 
 +++--
   fs/ocfs2/dlmglue.h |  18 +
   fs/ocfs2/ocfs2.h   |   1 +
   3 files changed, 121 insertions(+), 3 deletions(-)

 diff --git a/fs/ocfs2/dlmglue.c b/fs/ocfs2/dlmglue.c
 index 77d1632..c75b9e9 100644
 --- a/fs/ocfs2/dlmglue.c
 +++ b/fs/ocfs2/dlmglue.c
 @@ -532,6 +532,7 @@ void ocfs2_lock_res_init_once(struct ocfs2_lock_res 
 *res)
   init_waitqueue_head(&res->l_event);
   INIT_LIST_HEAD(&res->l_blocked_list);
   INIT_LIST_HEAD(&res->l_mask_waiters);
 +INIT_LIST_HEAD(&res->l_holders);
   }
 void ocfs2_inode_lock_res_init(struct ocfs2_lock_res *res,
 @@ -749,6 +750,50 @@ void ocfs2_lock_res_free(struct ocfs2_lock_res *res)
   res->l_flags = 0UL;
   }
   +/*
 + * Keep a list of processes who have interest in a lockres.
 + * Note: this is now only uesed for check recursive cluster locking.
 + */
 +static inline void ocfs2_add_holder(struct ocfs2_lock_res *lockres,
 +   struct ocfs2_lock_holder *oh)
 +{
 +INIT_LIST_HEAD(&oh->oh_list);
 +oh->oh_owner_pid =  get_pid(task_pid(current));
>>> Trim the redundant space here.
>>
>> You mean the blank line here? If 

Re: [Ocfs2-devel] [PATCH v3 1/2] ocfs2/dlmglue: prepare tracking logic to avoid recursive cluster lock

2017-01-17 Thread Joseph Qi
On 17/1/17 15:55, Eric Ren wrote:
> Hi!
>
> On 01/17/2017 03:39 PM, Joseph Qi wrote:
>>
>> On 17/1/17 14:30, Eric Ren wrote:
>>> We are in the situation that we have to avoid recursive cluster 
>>> locking,
>>> but there is no way to check if a cluster lock has been taken by a
>>> precess already.
>>>
>>> Mostly, we can avoid recursive locking by writing code carefully.
>>> However, we found that it's very hard to handle the routines that
>>> are invoked directly by vfs code. For instance:
>>>
>>> const struct inode_operations ocfs2_file_iops = {
>>>  .permission = ocfs2_permission,
>>>  .get_acl= ocfs2_iop_get_acl,
>>>  .set_acl= ocfs2_iop_set_acl,
>>> };
>>>
>>> Both ocfs2_permission() and ocfs2_iop_get_acl() call 
>>> ocfs2_inode_lock(PR):
>>> do_sys_open
>>>   may_open
>>>inode_permission
>>> ocfs2_permission
>>>  ocfs2_inode_lock() <=== first time
>>>   generic_permission
>>>get_acl
>>> ocfs2_iop_get_acl
>>> ocfs2_inode_lock() <=== recursive one
>>>
>>> A deadlock will occur if a remote EX request comes in between two
>>> of ocfs2_inode_lock(). Briefly describe how the deadlock is formed:
>>>
>>> On one hand, OCFS2_LOCK_BLOCKED flag of this lockres is set in
>>> BAST(ocfs2_generic_handle_bast) when downconvert is started
>>> on behalf of the remote EX lock request. Another hand, the recursive
>>> cluster lock (the second one) will be blocked in in 
>>> __ocfs2_cluster_lock()
>>> because of OCFS2_LOCK_BLOCKED. But, the downconvert never complete, 
>>> why?
>>> because there is no chance for the first cluster lock on this node 
>>> to be
>>> unlocked - we block ourselves in the code path.
>>>
>>> The idea to fix this issue is mostly taken from gfs2 code.
>>> 1. introduce a new field: struct ocfs2_lock_res.l_holders, to
>>> keep track of the processes' pid  who has taken the cluster lock
>>> of this lock resource;
>>> 2. introduce a new flag for ocfs2_inode_lock_full: 
>>> OCFS2_META_LOCK_GETBH;
>>> it means just getting back disk inode bh for us if we've got cluster 
>>> lock.
>>> 3. export a helper: ocfs2_is_locked_by_me() is used to check if we
>>> have got the cluster lock in the upper code path.
>>>
>>> The tracking logic should be used by some of the ocfs2 vfs's callbacks,
>>> to solve the recursive locking issue cuased by the fact that vfs 
>>> routines
>>> can call into each other.
>>>
>>> The performance penalty of processing the holder list should only be 
>>> seen
>>> at a few cases where the tracking logic is used, such as get/set acl.
>>>
>>> You may ask what if the first time we got a PR lock, and the second 
>>> time
>>> we want a EX lock? fortunately, this case never happens in the real 
>>> world,
>>> as far as I can see, including permission check, 
>>> (get|set)_(acl|attr), and
>>> the gfs2 code also do so.
>>>
>>> Changes since v1:
>>> - Let ocfs2_is_locked_by_me() just return true/false to indicate if the
>>> process gets the cluster lock - suggested by: Joseph Qi 
>>> 
>>> and Junxiao Bi .
>>>
>>> - Change "struct ocfs2_holder" to a more meaningful name 
>>> "ocfs2_lock_holder",
>>> suggested by: Junxiao Bi.
>>>
>>> - Do not inline functions whose bodies are not in scope, changed by:
>>> Stephen Rothwell .
>>>
>>> Changes since v2:
>>> - Wrap the tracking logic code of recursive locking into functions,
>>> ocfs2_inode_lock_tracker() and ocfs2_inode_unlock_tracker(),
>>> suggested by: Junxiao Bi.
>>>
>>> [s...@canb.auug.org.au remove some inlines]
>>> Signed-off-by: Eric Ren 
>>> ---
>>>   fs/ocfs2/dlmglue.c | 105 
>>> +++--
>>>   fs/ocfs2/dlmglue.h |  18 +
>>>   fs/ocfs2/ocfs2.h   |   1 +
>>>   3 files changed, 121 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/fs/ocfs2/dlmglue.c b/fs/ocfs2/dlmglue.c
>>> index 77d1632..c75b9e9 100644
>>> --- a/fs/ocfs2/dlmglue.c
>>> +++ b/fs/ocfs2/dlmglue.c
>>> @@ -532,6 +532,7 @@ void ocfs2_lock_res_init_once(struct 
>>> ocfs2_lock_res *res)
>>>   init_waitqueue_head(&res->l_event);
>>>   INIT_LIST_HEAD(&res->l_blocked_list);
>>>   INIT_LIST_HEAD(&res->l_mask_waiters);
>>> +INIT_LIST_HEAD(&res->l_holders);
>>>   }
>>> void ocfs2_inode_lock_res_init(struct ocfs2_lock_res *res,
>>> @@ -749,6 +750,50 @@ void ocfs2_lock_res_free(struct ocfs2_lock_res 
>>> *res)
>>>   res->l_flags = 0UL;
>>>   }
>>>   +/*
>>> + * Keep a list of processes who have interest in a lockres.
>>> + * Note: this is now only uesed for check recursive cluster locking.
>>> + */
>>> +static inline void ocfs2_add_holder(struct ocfs2_lock_res *lockres,
>>> +   struct ocfs2_lock_holder *oh)
>>> +{
>>> +INIT_LIST_HEAD(&oh->oh_list);
>>> +oh->oh_owner_pid =  get_pid(task_pid(current));
>> Trim the redundant space here.
>
> You mean the blank line here? If so, I am OK to make the change. But
> I'm a little worried that people may feel annoyed if I send the