Erik Braun:
> I tried to create a minimalistic example:
> http://users.minet.uni-jena.de/~erik/aufs/mv.log
>
> In the last line, the 'mv' command stalls.
>
> The corresponding kernel messages are in
> http://users.minet.uni-jena.de/~erik/aufs/netconsole.log

OK, this netconsole.log is different from the previous one, but it shows
the problem more clearly.
- the global lock in openafs (which I mentioned previously) is unrelated
  to this problem.
- the cause of this problem is the locking newly added to
  openafs/src/afs/LINUX/osi_vm.c:osi_VM_FlushPages() on Jun 23 2010.
  See the openafs commit b0ed5a7f (below).
- during copy-up, aufs acquires the i_mutex of the file in order to
  prevent it from being modified, and then tries opening the file for
  reading.
- since Jun 23 2010, openafs tries to acquire the i_mutex of that same
  file when opening it.
Combined, these two cause the hang.

In your (new) netconsole.log, you can see something like this.

[  481.710805]  [<ffffffff814f7b6a>] mutex_lock+0x1a/0x2a
[  481.712635]  [<ffffffffa0072e89>] osi_VM_FlushPages+0x19/0x40 [openafs]
[  481.714450]  [<ffffffffa003efb8>] osi_FlushPages+0x1e8/0x3b0 [openafs]
[  481.716263]  [<ffffffffa00592f5>] afs_open+0x155/0x6f0 [openafs]
[  481.718046]  [<ffffffffa0073ab9>] afs_linux_open+0x59/0xe0 [openafs]
[  481.719789]  [<ffffffff8118312f>] do_dentry_open+0x1bf/0x2b0
[  481.723179]  [<ffffffff81183274>] dentry_open+0x54/0xe0
[  481.724771]  [<ffffffffa064e339>] vfsub_dentry_open+0x19/0x20 [aufs]
[  481.726343]  [<ffffffffa065a957>] au_h_open+0x147/0x250 [aufs]
[  481.727890]  [<ffffffffa065061b>] au_cp_regular+0xab/0x1b0 [aufs]
[  481.732488]  [<ffffffffa0650916>] cpup_entry+0x1f6/0x5c0 [aufs]
[  481.735541]  [<ffffffffa0650f61>] au_cpup_single.constprop.17+0x281/0x700 [aufs]
[  481.738472]  [<ffffffffa0651805>] au_cpup_simple+0x65/0xb0 [aufs]
[  481.739932]  [<ffffffffa0651973>] au_call_cpup_simple+0x23/0x40 [aufs]

openafs' osi_VM_FlushPages() tries to acquire the i_mutex via mutex_lock()
(the top of this trace), but that same i_mutex was already acquired by aufs
before au_call_cpup_simple() (the bottom of this trace). Since kernel
mutexes are not recursive, the second mutex_lock() never returns and the
open (and thus 'mv') stalls.
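
To illustrate the pattern outside the kernel, here is a small userspace
model of the situation. The names (inode_mutex, copyup_like_caller,
openafs_like_open) are made up for this sketch; in the kernel the second
acquisition is a plain mutex_lock() and simply blocks forever instead of
printing a message.

	#include <pthread.h>
	#include <stdio.h>

	/* stands in for the lower file's i_mutex */
	static pthread_mutex_t inode_mutex = PTHREAD_MUTEX_INITIALIZER;

	/* models osi_VM_FlushPages() after commit b0ed5a7f:
	 * it now takes i_mutex itself */
	static void openafs_like_open(void)
	{
		if (pthread_mutex_trylock(&inode_mutex)) {
			/* in the kernel this is mutex_lock(),
			 * which blocks here forever */
			printf("deadlock: i_mutex already held by our caller\n");
			return;
		}
		/* truncate_inode_pages(&ip->i_data, 0) would run here */
		pthread_mutex_unlock(&inode_mutex);
	}

	/* models aufs copy-up: it holds i_mutex across the ->open()
	 * of the lower file */
	static void copyup_like_caller(void)
	{
		/* keep the file stable during copy-up */
		pthread_mutex_lock(&inode_mutex);
		/* dentry_open() ends up in afs_linux_open() */
		openafs_like_open();
		pthread_mutex_unlock(&inode_mutex);
	}

	int main(void)
	{
		copyup_like_caller();
		return 0;
	}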


(a commit from openafs.git)

commit b0ed5a7facb1951f2f4ef8ed3da29a6a80cb7d49
Author: Rainer Toebbicke <r...@pclella.cern.ch>
Date:   Wed Jun 23 15:10:46 2010 +0200

    Protect truncate_inode_pages when called from osi_VM_FlushPages
    
    truncate_inode_pages requires the mapping to be protected using
    i_mutex / i_sem, which is not held whereever osi_FlushPages is called.
    
    Change-Id: I2ca59cf75633368efb7f6a17fd01c7c517a8f609
    Reviewed-on: http://gerrit.openafs.org/2244
    Reviewed-by: Derrick Brashear <sha...@dementia.org>
    Tested-by: Derrick Brashear <sha...@dementia.org>

diff --git a/src/afs/LINUX/osi_compat.h b/src/afs/LINUX/osi_compat.h
index 2d7382f..342cdb8 100644
--- a/src/afs/LINUX/osi_compat.h
+++ b/src/afs/LINUX/osi_compat.h
@@ -342,3 +342,21 @@ afs_init_sb_export_ops(struct super_block *sb) {
        sb->s_export_op->find_exported_dentry = find_exported_dentry;
 #endif
 }
+
+static inline void
+afs_linux_lock_inode(struct inode *ip) {
+#ifdef STRUCT_INODE_HAS_I_MUTEX
+    mutex_lock(&ip->i_mutex);
+#else
+    down(&ip->i_sem);
+#endif
+}
+
+static inline void
+afs_linux_unlock_inode(struct inode *ip) {
+#ifdef STRUCT_INODE_HAS_I_MUTEX
+    mutex_unlock(&ip->i_mutex);
+#else
+    up(&ip->i_sem);
+#endif
+}
diff --git a/src/afs/LINUX/osi_vm.c b/src/afs/LINUX/osi_vm.c
index 99d72c3..2cd34f9 100644
--- a/src/afs/LINUX/osi_vm.c
+++ b/src/afs/LINUX/osi_vm.c
@@ -15,6 +15,8 @@
 #include "afsincludes.h"       /* Afs-based standard headers */
 #include "afs/afs_stats.h"     /* statistics */
 
+#include "osi_compat.h"
+
 /* Linux VM operations
  *
  * The general model for Linux is to treat vm as a cache that's:
@@ -116,7 +118,9 @@ osi_VM_FlushPages(struct vcache *avc, afs_ucred_t *credp)
 {
     struct inode *ip = AFSTOV(avc);
     
+    afs_linux_lock_inode(ip);
     truncate_inode_pages(&ip->i_data, 0);
+    afs_linux_unlock_inode(ip);
 }
 
 /* Purge pages beyond end-of-file, when truncating a file.


J. R. Okajima
