Re: [PATCH] Btrfs: cleanup to remove reduplicate code in iterate_extent_inode()

2013-03-30 Thread Wang Shilong

Hello,

 On 03/29/13 14:42, Wang Shilong wrote:
 From: Wang Shilong wangsl-f...@cn.fujitsu.com
 
 Just remove the unnecessary check and assignment.
 
 Signed-off-by: Wang Shilong wangsl-f...@cn.fujitsu.com
 ---
 fs/btrfs/backref.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
 
 diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
 index 3ca413bb..e102b48 100644
 --- a/fs/btrfs/backref.c
 +++ b/fs/btrfs/backref.c
 @@ -1499,7 +1499,7 @@ int iterate_extent_inodes(struct btrfs_fs_info *fs_info,
  if (ret)
  break;
  ULIST_ITER_INIT(&root_uiter);
 -while (!ret && (root_node = ulist_next(roots, &root_uiter))) {
 +while ((root_node = ulist_next(roots, &root_uiter))) {
 
 It doesn't look unnecessary at all to me. ret is set in the loop and
 only checked in the while condition.

Yeah, you are right..
 
  pr_debug("root %llu references leaf %llu, data list "
   "%#llx\n", root_node->val, ref_node->val,
   (long long)ref_node->aux);
 @@ -1510,7 +1510,6 @@ int iterate_extent_inodes(struct btrfs_fs_info *fs_info,
  iterate, ctx);
  }
  ulist_free(roots);
 -roots = NULL;
 
 roots gets freed again later on. If you don't set it to NULL, it will
 result in a double free.

If we are in the loop, 'roots' will be reallocated again. If the allocation in
find_all_roots() fails, 'roots' has already been freed inside find_all_roots(),
and we have broken out of the loop.

I don't see where a double free could happen. Am I missing something?
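For readers following along, the idiom under discussion can be shown in a few lines of plain C. This is a hypothetical stand-in, not the btrfs code: free() on an already-freed non-NULL pointer is undefined behavior, while clearing the pointer after freeing turns any second free into a defined no-op, because free(NULL) does nothing.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-in for btrfs' ulist; names are illustrative only. */
struct ulist {
	int nnodes;
};

struct ulist *ulist_alloc_demo(void)
{
	return calloc(1, sizeof(struct ulist));
}

/*
 * Free through a helper that also clears the caller's pointer.
 * free(NULL) is defined to do nothing, so a second call becomes
 * harmless, whereas calling free() twice on the same non-NULL
 * pointer is undefined behavior (the double free discussed above).
 */
void ulist_free_demo(struct ulist **ul)
{
	free(*ul);
	*ul = NULL;
}
```

The disagreement in the thread is exactly about whether a second free can be reached at all once the error paths are accounted for.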

Thanks,
Wang

 
 -Arne
 
  }
 
  free_leaf_list(refs);
 
 

--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH] Btrfs: cleanup to remove reduplicate code in iterate_extent_inode()

2013-03-30 Thread Arne Jansen
On 03/30/13 12:55, Wang Shilong wrote:
 snip
 
 On 03/29/13 14:42, Wang Shilong wrote:
 From: Wang Shilong wangsl-f...@cn.fujitsu.com

 Just remove the unnecessary check and assignment.

 Signed-off-by: Wang Shilong wangsl-f...@cn.fujitsu.com
 ---
 fs/btrfs/backref.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

 diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
 index 3ca413bb..e102b48 100644
 --- a/fs/btrfs/backref.c
 +++ b/fs/btrfs/backref.c
 @@ -1499,7 +1499,7 @@ int iterate_extent_inodes(struct btrfs_fs_info *fs_info,
  if (ret)
  break;
  ULIST_ITER_INIT(&root_uiter);
 -   while (!ret && (root_node = ulist_next(roots, &root_uiter))) {
 +   while ((root_node = ulist_next(roots, &root_uiter))) {

 It doesn't look unnecessary at all to me. ret is set in the loop and
 only checked in the while condition.

  pr_debug("root %llu references leaf %llu, data list "
   "%#llx\n", root_node->val, ref_node->val,
   (long long)ref_node->aux);
 @@ -1510,7 +1510,6 @@ int iterate_extent_inodes(struct btrfs_fs_info *fs_info,
 iterate, ctx);
 }
 ulist_free(roots);
 -   roots = NULL;

 roots gets freed again later on. If you don't set it to NULL, it will
 result in a double free.
 
 Maybe you mean this?
 
 http://marc.info/?l=linux-btrfsm=136456233929528w=2
 ulist_free() here is unnecessary and may cause a double free…
 So we don't need to set it to NULL again.

Yeah, I haven't seen your other patch.

 
 
 

 -Arne

 }

 free_leaf_list(refs);


 



Re: [PATCH] Btrfs: limit the global reserve to 512mb

2013-03-30 Thread Roman Mamedov
On Tue, 26 Mar 2013 15:34:06 -0400
Josef Bacik jba...@fusionio.com wrote:

 A user reported a problem where he was getting early ENOSPC with hundreds of
 gigs of free data space and 6 gigs of free metadata space.  This is because 
 the
 global block reserve was taking up the entire free metadata space.  This is
 ridiculous, we have infrastructure in place to throttle if we start using too
 much of the global reserve, so instead of letting it get this huge just limit 
 it
 to 512mb so that users can still get work done.  This allowed the user to
 complete his rsync without issues.  Thanks

Just a note, I added this to 3.7.10 and was able to complete a balance on a
difficult filesystem (60GB in size, used for /home, with creation and
deletion of snapshots by cron), that was otherwise seeming to loop/hang
and continuously spewing -28 errors into dmesg.
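As a side note for readers, the clamping the patch applies is easy to sketch in userspace. The min_t here is a re-implementation of the kernel macro for illustration, and the helper name and sizes are hypothetical:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Userspace re-implementation of the kernel's min_t() for
 * illustration: both operands are cast to the given type
 * before comparing.
 */
#define min_t(type, x, y) ((type)(x) < (type)(y) ? (type)(x) : (type)(y))

/* the 512mb cap from the patch */
#define GLOBAL_RSV_CAP ((uint64_t)512 * 1024 * 1024)

uint64_t clamp_global_rsv(uint64_t num_bytes)
{
	return min_t(uint64_t, num_bytes, GLOBAL_RSV_CAP);
}
```

So a computed reserve of, say, 6 GiB is cut back to 512 MiB, while anything already below the cap passes through unchanged.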

 Cc: sta...@vger.kernel.org
 Reported-and-tested-by: Stefan Priebe s.pri...@profihost.ag
 Signed-off-by: Josef Bacik jba...@fusionio.com
 ---
  fs/btrfs/extent-tree.c |2 +-
  1 files changed, 1 insertions(+), 1 deletions(-)
 
 diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
 index c08c7c8..5791da2 100644
 --- a/fs/btrfs/extent-tree.c
 +++ b/fs/btrfs/extent-tree.c
 @@ -4554,7 +4554,7 @@ static void update_global_block_rsv(struct btrfs_fs_info *fs_info)
  spin_lock(&sinfo->lock);
  spin_lock(&block_rsv->lock);
 
 - block_rsv->size = num_bytes;
 + block_rsv->size = min_t(u64, num_bytes, 512 * 1024 * 1024);
 
  num_bytes = sinfo->bytes_used + sinfo->bytes_pinned +
  sinfo->bytes_reserved + sinfo->bytes_readonly +


-- 
With respect,
Roman




Re: [PATCH V2 1/6] Btrfs: introduce a mutex lock for btrfs quota operations

2013-03-30 Thread Arne Jansen
On 03/28/13 11:53, Wang Shilong wrote:
 From: Wang Shilong wangsl-f...@cn.fujitsu.com
 
 This patch introduces the mutex lock 'quota_lock', and makes
 all user quota change operations protected by quota_lock.
 
 Signed-off-by: Wang Shilong wangsl-f...@cn.fujitsu.com
 Reviewed-by: Miao Xie mi...@cn.fujitsu.com
 ---
  fs/btrfs/ctree.h   |3 +++
  fs/btrfs/disk-io.c |1 +
 fs/btrfs/ioctl.c   |   16 ++++++++++++----
  3 files changed, 16 insertions(+), 4 deletions(-)
 
 diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
 index 6e81860..a11a8ed 100644
 --- a/fs/btrfs/ctree.h
 +++ b/fs/btrfs/ctree.h
 @@ -1584,6 +1584,9 @@ struct btrfs_fs_info {
   struct rb_root qgroup_tree;
   spinlock_t qgroup_lock;
  
 + /* protect user change operations for quota */
 + struct mutex quota_lock;
 +
   /* list of dirty qgroups to be written at next commit */
   struct list_head dirty_qgroups;
  
 diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
 index fe82d08..4552f14 100644
 --- a/fs/btrfs/disk-io.c
 +++ b/fs/btrfs/disk-io.c
 @@ -2250,6 +2250,7 @@ int open_ctree(struct super_block *sb,
  mutex_init(&fs_info->dev_replace.lock);
 
  spin_lock_init(&fs_info->qgroup_lock);
 + mutex_init(&fs_info->quota_lock);
  fs_info->qgroup_tree = RB_ROOT;
  INIT_LIST_HEAD(&fs_info->dirty_qgroups);
  fs_info->qgroup_seq = 1;
 diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
 index 222ce84..e2950f1 100644
 --- a/fs/btrfs/ioctl.c
 +++ b/fs/btrfs/ioctl.c
 @@ -752,7 +752,7 @@ static noinline int btrfs_mksubvol(struct path *parent,
  
  if (btrfs_root_refs(&BTRFS_I(dir)->root->root_item) == 0)
  goto out_up_read;
 -
 + mutex_lock(&BTRFS_I(dir)->root->fs_info->quota_lock);
  if (snap_src) {
  error = create_snapshot(snap_src, dir, dentry, name, namelen,
  async_transid, readonly, inherit);
 @@ -762,6 +762,7 @@ static noinline int btrfs_mksubvol(struct path *parent,
  }
  if (!error)
  fsnotify_mkdir(dir, dentry);
 + mutex_unlock(&BTRFS_I(dir)->root->fs_info->quota_lock);

You are completely serializing subvolume operations here. I'd prefer if
you'd move the lock to a lower level to only protect the quota
operations. Can't you move the lock completely to qgroup.c?
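To make the granularity point concrete, here is a toy sketch (illustrative names only, not the btrfs code): the coarse variant serializes the whole operation the way the patch does, while the fine variant takes the lock only around the quota state that actually needs it.

```c
#include <assert.h>

/* Toy mutex: tracks state so lock scope can be observed in a test. */
struct toy_mutex {
	int held;
	int acquisitions;
};

static struct toy_mutex quota_lock;
static int qgroups;

static void toy_lock(struct toy_mutex *m)
{
	assert(!m->held);	/* no recursive locking in this sketch */
	m->held = 1;
	m->acquisitions++;
}

static void toy_unlock(struct toy_mutex *m)
{
	assert(m->held);
	m->held = 0;
}

/* Coarse: the whole operation runs under the lock (as in the patch). */
void create_subvol_coarse(void)
{
	toy_lock(&quota_lock);
	/* ... directory setup, snapshot creation, etc., all serialized ... */
	qgroups++;		/* the only quota state actually touched */
	toy_unlock(&quota_lock);
}

/* Fine: only the quota update is serialized (what the review asks for). */
void create_subvol_fine(void)
{
	/* ... directory setup and snapshot creation run lock-free ... */
	toy_lock(&quota_lock);
	qgroups++;
	toy_unlock(&quota_lock);
}
```

Both variants protect the same state; they differ only in how long unrelated work is held up behind the mutex.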

  out_up_read:
  up_read(&BTRFS_I(dir)->root->fs_info->subvol_sem);
  out_dput:
 @@ -3693,6 +3694,7 @@ static long btrfs_ioctl_quota_ctl(struct file *file, void __user *arg)
  goto drop_write;
  }
 
 + mutex_lock(&root->fs_info->quota_lock);
  down_read(&root->fs_info->subvol_sem);
  if (sa->cmd != BTRFS_QUOTA_CTL_RESCAN) {
  trans = btrfs_start_transaction(root, 2);
 @@ -3728,6 +3730,7 @@ static long btrfs_ioctl_quota_ctl(struct file *file, void __user *arg)
 out:
  kfree(sa);
  up_read(&root->fs_info->subvol_sem);
 + mutex_unlock(&root->fs_info->quota_lock);
 drop_write:
  mnt_drop_write_file(file);
  return ret;
 @@ -3754,6 +3757,7 @@ static long btrfs_ioctl_qgroup_assign(struct file *file, void __user *arg)
  goto drop_write;
  }
 
 + mutex_lock(&root->fs_info->quota_lock);
  trans = btrfs_join_transaction(root);
  if (IS_ERR(trans)) {
  ret = PTR_ERR(trans);
 @@ -3775,6 +3779,7 @@ static long btrfs_ioctl_qgroup_assign(struct file *file, void __user *arg)
 
 out:
  kfree(sa);
 + mutex_unlock(&root->fs_info->quota_lock);
 drop_write:
  mnt_drop_write_file(file);
  return ret;
 @@ -3805,11 +3810,11 @@ static long btrfs_ioctl_qgroup_create(struct file *file, void __user *arg)
  ret = -EINVAL;
  goto out;
  }
 -
 + mutex_lock(&root->fs_info->quota_lock);
  trans = btrfs_join_transaction(root);
  if (IS_ERR(trans)) {
  ret = PTR_ERR(trans);
 - goto out;
 + goto out_unlock;
  }
 
  /* FIXME: check if the IDs really exist */
 @@ -3824,6 +3829,8 @@ static long btrfs_ioctl_qgroup_create(struct file *file, void __user *arg)
  if (err && !ret)
  ret = err;
 
 +out_unlock:
 + mutex_unlock(&root->fs_info->quota_lock);
 out:
  kfree(sa);
 drop_write:
 @@ -3852,7 +3859,7 @@ static long btrfs_ioctl_qgroup_limit(struct file *file, void __user *arg)
  ret = PTR_ERR(sa);
  goto drop_write;
  }
 -
 + mutex_lock(&root->fs_info->quota_lock);
  trans = btrfs_join_transaction(root);
  if (IS_ERR(trans)) {
  ret = PTR_ERR(trans);
 @@ -3874,6 +3881,7 @@ static long btrfs_ioctl_qgroup_limit(struct file *file, void __user *arg)
 
 out:
  kfree(sa);
 + mutex_unlock(&root->fs_info->quota_lock);
 drop_write:
  mnt_drop_write_file(file);
  return ret;
 



Re: [PATCH V2 2/6] Btrfs: remove some unnecessary spin_lock usages

2013-03-30 Thread Arne Jansen
On 03/28/13 11:54, Wang Shilong wrote:
 From: Wang Shilong wangsl-f...@cn.fujitsu.com
 
 We use mutex_lock to protect all the user change operations.
 So when we are calling find_qgroup_rb() to check whether a
 qgroup exists, we don't have to hold the spin_lock.
 
 Besides, when enabling/disabling quota, operations must be
 single-threaded by the time they get here. The spin_lock must
 still be used to clear quota_root when disabling quota, and,
 when enabling quota, to complete the last assign work.
 
 Signed-off-by: Wang Shilong wangsl-f...@cn.fujitsu.com
 Reviewed-by: Miao Xie mi...@cn.fujitsu.com
 ---
  fs/btrfs/qgroup.c |   42 +++---
  1 files changed, 15 insertions(+), 27 deletions(-)
 
 diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
 index e3598fa..7df372a 100644
 --- a/fs/btrfs/qgroup.c
 +++ b/fs/btrfs/qgroup.c
 @@ -42,7 +42,6 @@
   *  - limit
   *  - caches fuer ulists
   *  - performance benchmarks
 - *  - check all ioctl parameters
   */
  
  /*
 @@ -98,7 +97,11 @@ struct btrfs_qgroup_list {
   struct btrfs_qgroup *member;
  };
  
 -/* must be called with qgroup_lock held */

instead it must be called with quota_lock held. The rest looks
correct to me.

-Arne

 +/*
 + * no need to hold the spin_lock here, since all the
 + * in-memory quota configuration is protected by the
 + * quota_lock mutex.
 + */


  static struct btrfs_qgroup *find_qgroup_rb(struct btrfs_fs_info *fs_info,
  u64 qgroupid)
  {
 @@ -793,13 +796,10 @@ int btrfs_quota_enable(struct btrfs_trans_handle *trans,
   int ret = 0;
   int slot;
  
 - spin_lock(&fs_info->qgroup_lock);
  if (fs_info->quota_root) {
  fs_info->pending_quota_state = 1;
 - spin_unlock(&fs_info->qgroup_lock);
 - goto out;
 + return ret;
  }
 - spin_unlock(&fs_info->qgroup_lock);
  
   /*
* initially create the quota tree
 @@ -808,7 +808,7 @@ int btrfs_quota_enable(struct btrfs_trans_handle *trans,
  BTRFS_QUOTA_TREE_OBJECTID);
  if (IS_ERR(quota_root)) {
  ret = PTR_ERR(quota_root);
 - goto out;
 + return ret;
   }
  
   path = btrfs_alloc_path();
 @@ -861,14 +861,11 @@ int btrfs_quota_enable(struct btrfs_trans_handle *trans,
   if (ret)
   goto out_free_path;
  
 - spin_lock(&fs_info->qgroup_lock);
  qgroup = add_qgroup_rb(fs_info, found_key.offset);
  if (IS_ERR(qgroup)) {
 - spin_unlock(&fs_info->qgroup_lock);
  ret = PTR_ERR(qgroup);
  goto out_free_path;
  }
 - spin_unlock(&fs_info->qgroup_lock);
   }
   ret = btrfs_next_item(tree_root, path);
  if (ret < 0)
 @@ -883,13 +880,12 @@ out_add_root:
   if (ret)
   goto out_free_path;
  
 - spin_lock(&fs_info->qgroup_lock);
  qgroup = add_qgroup_rb(fs_info, BTRFS_FS_TREE_OBJECTID);
  if (IS_ERR(qgroup)) {
 - spin_unlock(&fs_info->qgroup_lock);
  ret = PTR_ERR(qgroup);
  goto out_free_path;
  }
 + spin_lock(&fs_info->qgroup_lock);
  fs_info->quota_root = quota_root;
  fs_info->pending_quota_state = 1;
  spin_unlock(&fs_info->qgroup_lock);
 @@ -901,7 +897,6 @@ out_free_root:
  free_extent_buffer(quota_root->commit_root);
   kfree(quota_root);
   }
 -out:
   return ret;
  }
  
 @@ -912,11 +907,10 @@ int btrfs_quota_disable(struct btrfs_trans_handle *trans,
  struct btrfs_root *quota_root;
  int ret = 0;
 
 - spin_lock(&fs_info->qgroup_lock);
 - if (!fs_info->quota_root) {
 - spin_unlock(&fs_info->qgroup_lock);
 + if (!fs_info->quota_root)
  return 0;
 - }
 +
 + spin_lock(&fs_info->qgroup_lock);
  fs_info->quota_enabled = 0;
  fs_info->pending_quota_state = 0;
  quota_root = fs_info->quota_root;
 @@ -1041,15 +1035,12 @@ int btrfs_remove_qgroup(struct btrfs_trans_handle *trans,
  return -EINVAL;
 
  /* check if there are no relations to this qgroup */
 - spin_lock(&fs_info->qgroup_lock);
  qgroup = find_qgroup_rb(fs_info, qgroupid);
  if (qgroup) {
 - if (!list_empty(&qgroup->groups) ||
 !list_empty(&qgroup->members)) {
 - spin_unlock(&fs_info->qgroup_lock);
 + if (!list_empty(&qgroup->groups) ||
 + !list_empty(&qgroup->members))
  return -EBUSY;
 - }
  }
 - spin_unlock(&fs_info->qgroup_lock);
  
   ret = del_qgroup_item(trans, quota_root, qgroupid);
  
 @@ -1081,20 +1072,17 @@ int btrfs_limit_qgroup(struct btrfs_trans_handle *trans,
  (unsigned long long)qgroupid);
   }
  
 - 

Re: [PATCH V2 1/6] Btrfs: introduce a mutex lock for btrfs quota operations

2013-03-30 Thread Wang Shilong
Hello Arne,

 On 03/28/13 11:53, Wang Shilong wrote:
 From: Wang Shilong wangsl-f...@cn.fujitsu.com
 
 This patch introduces the mutex lock 'quota_lock', and makes
 all user quota change operations protected by quota_lock.
 
 Signed-off-by: Wang Shilong wangsl-f...@cn.fujitsu.com
 Reviewed-by: Miao Xie mi...@cn.fujitsu.com
 ---
 fs/btrfs/ctree.h   |3 +++
 fs/btrfs/disk-io.c |1 +
 fs/btrfs/ioctl.c   |   16 ++++++++++++----
 3 files changed, 16 insertions(+), 4 deletions(-)
 
 diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
 index 6e81860..a11a8ed 100644
 --- a/fs/btrfs/ctree.h
 +++ b/fs/btrfs/ctree.h
 @@ -1584,6 +1584,9 @@ struct btrfs_fs_info {
  struct rb_root qgroup_tree;
  spinlock_t qgroup_lock;
 
 +/* protect user change operations for quota */
 +struct mutex quota_lock;
 +
  /* list of dirty qgroups to be written at next commit */
  struct list_head dirty_qgroups;
 
 diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
 index fe82d08..4552f14 100644
 --- a/fs/btrfs/disk-io.c
 +++ b/fs/btrfs/disk-io.c
 @@ -2250,6 +2250,7 @@ int open_ctree(struct super_block *sb,
  mutex_init(&fs_info->dev_replace.lock);
 
  spin_lock_init(&fs_info->qgroup_lock);
 +mutex_init(&fs_info->quota_lock);
  fs_info->qgroup_tree = RB_ROOT;
  INIT_LIST_HEAD(&fs_info->dirty_qgroups);
  fs_info->qgroup_seq = 1;
 diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
 index 222ce84..e2950f1 100644
 --- a/fs/btrfs/ioctl.c
 +++ b/fs/btrfs/ioctl.c
 @@ -752,7 +752,7 @@ static noinline int btrfs_mksubvol(struct path *parent,
 
  if (btrfs_root_refs(&BTRFS_I(dir)->root->root_item) == 0)
  goto out_up_read;
 -
 +mutex_lock(&BTRFS_I(dir)->root->fs_info->quota_lock);
  if (snap_src) {
  error = create_snapshot(snap_src, dir, dentry, name, namelen,
  async_transid, readonly, inherit);
 @@ -762,6 +762,7 @@ static noinline int btrfs_mksubvol(struct path *parent,
  }
  if (!error)
  fsnotify_mkdir(dir, dentry);
 +mutex_unlock(&BTRFS_I(dir)->root->fs_info->quota_lock);
 
 You are completely serializing subvolume operations here. I'd prefer if
 you'd move the lock to a lower level to only protect the quota
 operations. Can't you move the lock completely to qgroup.c?

I have thought about this.
We really should not start a transaction if the checks fail, so I have placed
all the necessary checks before the transaction!

For user quota change operations, I think it is ok to serialize them.
The only problematic place may be create_snapshot()/create_subvolume(). Maybe I
can move the mutex_lock to the place where btrfs_qgroup_inherit() is called.

Thanks,
wang
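The ordering described above can be sketched abstractly (all names here are hypothetical, not the btrfs code): every check that can fail cheaply runs first, and the transaction is only started once the checks pass, so a bad argument never costs a transaction start/abort cycle.

```c
#include <assert.h>

static int transactions_started;

static int start_transaction(void)
{
	transactions_started++;
	return 0;
}

static void end_transaction(void)
{
}

int qgroup_create_sketch(unsigned long long qgroupid)
{
	/* validate arguments first, before touching the transaction */
	if (qgroupid == 0)
		return -22;	/* -EINVAL */

	if (start_transaction() != 0)
		return -5;	/* -EIO, unreachable in this sketch */

	/* ... insert the qgroup item here ... */
	end_transaction();
	return 0;
}
```

With this shape, a failed check returns before start_transaction() is ever reached, which is the property the reply is defending.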
 
 
 out_up_read:
  up_read(&BTRFS_I(dir)->root->fs_info->subvol_sem);
 out_dput:
 @@ -3693,6 +3694,7 @@ static long btrfs_ioctl_quota_ctl(struct file *file, void __user *arg)
  goto drop_write;
  }
 
 +mutex_lock(&root->fs_info->quota_lock);
  down_read(&root->fs_info->subvol_sem);
  if (sa->cmd != BTRFS_QUOTA_CTL_RESCAN) {
  trans = btrfs_start_transaction(root, 2);
 @@ -3728,6 +3730,7 @@ static long btrfs_ioctl_quota_ctl(struct file *file, void __user *arg)
 out:
  kfree(sa);
  up_read(&root->fs_info->subvol_sem);
 +mutex_unlock(&root->fs_info->quota_lock);
 drop_write:
  mnt_drop_write_file(file);
  return ret;
 @@ -3754,6 +3757,7 @@ static long btrfs_ioctl_qgroup_assign(struct file *file, void __user *arg)
  goto drop_write;
  }
 
 +mutex_lock(&root->fs_info->quota_lock);
  trans = btrfs_join_transaction(root);
  if (IS_ERR(trans)) {
  ret = PTR_ERR(trans);
 @@ -3775,6 +3779,7 @@ static long btrfs_ioctl_qgroup_assign(struct file *file, void __user *arg)
 
 out:
  kfree(sa);
 +mutex_unlock(&root->fs_info->quota_lock);
 drop_write:
  mnt_drop_write_file(file);
  return ret;
 @@ -3805,11 +3810,11 @@ static long btrfs_ioctl_qgroup_create(struct file *file, void __user *arg)
  ret = -EINVAL;
  goto out;
  }
 -
 +mutex_lock(&root->fs_info->quota_lock);
  trans = btrfs_join_transaction(root);
  if (IS_ERR(trans)) {
  ret = PTR_ERR(trans);
 -goto out;
 +goto out_unlock;
  }
 
  /* FIXME: check if the IDs really exist */
 @@ -3824,6 +3829,8 @@ static long btrfs_ioctl_qgroup_create(struct file *file, void __user *arg)
  if (err && !ret)
  ret = err;
 
 +out_unlock:
 +mutex_unlock(&root->fs_info->quota_lock);
 out:
  kfree(sa);
 drop_write:
 @@ -3852,7 +3859,7 @@ static long btrfs_ioctl_qgroup_limit(struct file *file, void __user *arg)
  ret = PTR_ERR(sa);
  goto drop_write;
  }
 -
 +mutex_lock(&root->fs_info->quota_lock);
  trans = btrfs_join_transaction(root);
  if (IS_ERR(trans)) {
  ret = PTR_ERR(trans);
 @@ -3874,6 +3881,7 @@ static long btrfs_ioctl_qgroup_limit(struct file *file, void __user *arg)
 
 out:
  kfree(sa);
 +mutex_unlock(&root->fs_info->quota_lock);