Hi,

On 12/08/2019 14:43, Bob Peterson wrote:
----- Original Message -----
The real problem came with renames, though. Function
gfs2_rename(), which locked a series of inode glocks, did so
in parent-child order due to that patch. But it was still
possible to create circular lock dependencies just by doing the
wrong combination of renames on different nodes. For example:

Node a: mv /mnt/gfs2/sub /mnt/gfs2/tmp_name (rename sub to tmp_name)

a1. Same directory, so rename glock is NOT held
a2. /mnt/gfs2 is locked
a3. Tries to lock sub for rename, but it is locked on node b

Node b: mv /mnt/gfs2/sub /mnt/gfs2/dir1/ (move sub to dir1...
          mv /mnt/gfs2/dir1/sub /mnt/gfs2/  ...then move it back)

b1. Different directory, so rename glock IS held
b2. /mnt/gfs2 is locked
b3. dir1 is locked
b4. sub is moved to dir1 and everything is unlocked
b5. Different directory, so rename glock IS held again
b6. dir1 is locked
b7. Lock for /mnt/gfs2 is requested, but cannot be granted because
      node a locked it in step a2.

If the parents are being locked before the child, as per the correct
locking order, then this cannot happen. The directory in which the child
is located should always be locked first, before the child, so that is
what protects the operation on node a from whatever might be going on on
node b.

When you get to step b7, sub is not locked (since it was unlocked in b4)
and is not locked again. Thus a3 can complete. So this doesn't look like
it is the right explanation.

Hi,

I guess maybe my explanation is lacking.
It's not so much a relationship between "parent" and "child"
directories as it is "old" and "new" directories.

The comments for function vfs_rename() explain the situations in which
this can happen, and how they have been prevented on a single node through
the use of s_vfs_rename_mutex. However, that mutex is not cluster-wide,
which means the relationship of which inode is the "old" and which
inode is the "new" can change indiscriminately without notice and
without cluster-wide locking. The whole point of the "a" and "b"
scenarios was to illustrate that one node can lock "old", then "new",
but the other node can reverse the roles of those same inodes (which
is the "old" and which is the "new") and therefore reverse the lock
order without notice.
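
To make that concrete, here's a purely illustrative sketch. None of the
names below (lock_inode_glock, dir_A, dir_B) are real gfs2 code; they just
show how the same two directory glocks end up being taken in opposite
orders once the old/new roles are swapped:

/*
 * Node a renames from dir_A into dir_B:  old = dir_A, new = dir_B.
 * Node b renames from dir_B into dir_A:  old = dir_B, new = dir_A.
 */
static void lock_old_then_new(struct inode *old_dir, struct inode *new_dir)
{
        lock_inode_glock(old_dir);      /* hypothetical helper */
        lock_inode_glock(new_dir);
}

/* Node a: lock_old_then_new(dir_A, dir_B);  takes A's glock, then B's */
/* Node b: lock_old_then_new(dir_B, dir_A);  takes B's glock, then A's */
/* Each node now holds one glock and waits for the other: ABBA deadlock. */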

Since the old-new relationship itself is not protected, we need
some other way to get the lock order correct.

My first attempt to fix this was to extend the "rename" glock to have
a rename-wide reach so that it covered both types of renames, rather than
today's code, which only takes the rename glock when the old and new
directories differ. I implemented this with a new i_op called by the vfs
(vfs_rename) to make the rename glock serve as a kind of cluster-wide
version of the vfs's s_vfs_rename_mutex. However, this ended up having a
huge performance penalty for my test.
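
Roughly, the shape of that first attempt was something like this (just a
sketch, not the actual patch; the helper name is made up, but
sdp->sd_rename_gl is the existing per-filesystem rename glock):

/*
 * Sketch: take the rename glock in EX for every rename, not just
 * cross-directory ones, so it acts like a cluster-wide
 * s_vfs_rename_mutex.
 */
static int gfs2_rename_lock_cluster(struct gfs2_sbd *sdp,
                                    struct gfs2_holder *r_gh)
{
        return gfs2_glock_nq_init(sdp->sd_rename_gl, LM_ST_EXCLUSIVE, 0, r_gh);
}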

My second attempt (the patch I posted) was to lock the inodes in
sorted block-number order, because the block number relationships
will never change, regardless of which is old and which is new.
It made no sense to me to reinvent the wheel wrt locking them in
sorted order, so I used gfs2_glock_nq_m, which already does that.
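
For reference, the shape of that approach is roughly as follows (a sketch
with illustrative variable names, not the actual patch):

static int lock_rename_glocks(struct inode *old_dir, struct inode *new_dir,
                              struct inode *old_inode,
                              struct inode *new_inode,
                              struct gfs2_holder *ghs)
{
        unsigned int num_gh = 0;

        /* Assume the old and new directories differ; the same-directory
         * case needs fewer holders. */
        gfs2_holder_init(GFS2_I(old_dir)->i_gl, LM_ST_EXCLUSIVE, 0,
                         &ghs[num_gh++]);
        gfs2_holder_init(GFS2_I(new_dir)->i_gl, LM_ST_EXCLUSIVE, 0,
                         &ghs[num_gh++]);
        gfs2_holder_init(GFS2_I(old_inode)->i_gl, LM_ST_EXCLUSIVE, 0,
                         &ghs[num_gh++]);
        if (new_inode)  /* rename target exists and will be unlinked */
                gfs2_holder_init(GFS2_I(new_inode)->i_gl, LM_ST_EXCLUSIVE, 0,
                                 &ghs[num_gh++]);

        /* gfs2_glock_nq_m() sorts the holders by glock (block) number
         * before acquiring them, so every node uses the same order. */
        return gfs2_glock_nq_m(num_gh, ghs);
}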

Regards,

Bob Peterson

We are doing our best to get rid of the _m glock functions. Sorting things in block number order is a bit of a hack, and it would be better to spend the time reducing the number of glocks involved in each operation overall.

I have wondered about the performance issues on the rename glock. Simply using that for everything is the obvious easy fix, but it is perhaps not surprising that you've seen some performance issues with that approach. I wonder if we can come up with a solution to break up the single rename glock into separate glocks using a hashing scheme, or some similar system. That way we might get the advantage of improved speed while retaining the parent/child locking.
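
Something along these lines, perhaps (entirely hypothetical; none of the
names or numbers below exist in gfs2 today): a small pool of nondisk
rename glocks, with each directory mapping to one of them by hashing its
block number, and cross-directory renames taking the glocks for both
directories in ascending number order.

#include <linux/hash.h>
#include <linux/log2.h>

#define NUM_RENAME_GLOCKS       8       /* made-up pool size */
#define RENAME_GLOCK_BASE       0x100   /* made-up nondisk glock number base */

/*
 * Map a directory's block number to one of the rename glock numbers.
 * Every node computes the same mapping, so the ordering stays consistent
 * cluster-wide.
 */
static u64 rename_glock_number(u64 dir_blkno)
{
        return RENAME_GLOCK_BASE +
               hash_64(dir_blkno, ilog2(NUM_RENAME_GLOCKS));
}

Renames whose directories hash to different glocks would then no longer
serialize against each other, while any two renames that could actually
race over the same directories would still contend on the same glock(s).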

Either way, changing the lock ordering of lots of other bits of code is a non-starter, since it would then be incompatible with the way gfs2 has worked since it was created, and also incompatible with the vfs's own locking order that is used for local locks too.

Let's see if we can figure out a solution that just addresses this particular issue on its own,

Steve.

