Re: [PATCH] Btrfs: skip commit transaction if we don't have enough pinned bytes

2017-06-06 Thread Omar Sandoval
On Mon, Jun 05, 2017 at 09:29:47PM -0700, Liu Bo wrote:
> On Fri, Jun 02, 2017 at 11:14:13AM -0700, Omar Sandoval wrote:
> > On Fri, May 19, 2017 at 11:39:15AM -0600, Liu Bo wrote:
> > > We commit the transaction in order to reclaim space from pinned bytes
> > > because it can process delayed refs. In may_commit_transaction(), we
> > > first check whether the pinned bytes alone are enough for the required
> > > space; we then check whether pinned bytes plus the bytes reserved for
> > > delayed insert are enough.
> > > 
> > > This changes the code to match the above logic.
> > > 
> > > Signed-off-by: Liu Bo 
> > > ---
> > >  fs/btrfs/extent-tree.c | 2 +-
> > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > > 
> > > diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
> > > index e390451c72e6..bded1ddd1bb6 100644
> > > --- a/fs/btrfs/extent-tree.c
> > > +++ b/fs/btrfs/extent-tree.c
> > > @@ -4837,7 +4837,7 @@ static int may_commit_transaction(struct btrfs_fs_info *fs_info,
> > >  
> > >   spin_lock(&delayed_rsv->lock);
> > >   if (percpu_counter_compare(&space_info->total_bytes_pinned,
> > > -bytes - delayed_rsv->size) >= 0) {
> > > +bytes - delayed_rsv->size) < 0) {
> > >   spin_unlock(&delayed_rsv->lock);
> > >   return -ENOSPC;
> > >   }
> > 
> > I found this bug in my latest enospc investigation, too. However, the
> > total_bytes_pinned counter is pretty broken. E.g., on my laptop:
> > 
> > $ sync; grep -H '' /sys/fs/btrfs/*/allocation/*/total_bytes_pinned
> > /sys/fs/btrfs/f31cc926-37d3-442d-b50a-10c62d47badc/allocation/data/total_bytes_pinned:48693501952
> > /sys/fs/btrfs/f31cc926-37d3-442d-b50a-10c62d47badc/allocation/metadata/total_bytes_pinned:-258146304
> > /sys/fs/btrfs/f31cc926-37d3-442d-b50a-10c62d47badc/allocation/system/total_bytes_pinned:0
> > 
> > I have a patch to fix it that I haven't cleaned up yet, below. Without
> > it, Bo's fix will probably cause early ENOSPCs. Dave, should we pull
> > Bo's patch out of for-next? In any case, I'll get my fix sent out.
> >
> 
> When data delayed refs are not merged, __btrfs_free_extent() decrements
> the pinned bytes as expected, but when they get merged, the pinned bytes
> only get incremented, never decremented, because the delayed ref is
> deleted once its ref_count reaches 0.
> 
> This doesn't happen to tree block refs since a tree block never has more
> than one ref, so ref_mod is either 1 or -1, and tree block refs get
> removed via btrfs_free_tree_block(), which also needs to be changed.
> 
> Based on that, I think we can solve the problem by touching
> btrfs_free_tree_block() and the data delayed refs instead of the head ref.
> 
> thanks,
> -liubo

I just sent out the fixes split up into separate patches, so hopefully
the different bugs are clearer.
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH] Btrfs: skip commit transaction if we don't have enough pinned bytes

2017-06-05 Thread Liu Bo
On Fri, Jun 02, 2017 at 11:14:13AM -0700, Omar Sandoval wrote:
> On Fri, May 19, 2017 at 11:39:15AM -0600, Liu Bo wrote:
> > We commit the transaction in order to reclaim space from pinned bytes
> > because it can process delayed refs. In may_commit_transaction(), we
> > first check whether the pinned bytes alone are enough for the required
> > space; we then check whether pinned bytes plus the bytes reserved for
> > delayed insert are enough.
> > 
> > This changes the code to match the above logic.
> > 
> > Signed-off-by: Liu Bo 
> > ---
> >  fs/btrfs/extent-tree.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
> > index e390451c72e6..bded1ddd1bb6 100644
> > --- a/fs/btrfs/extent-tree.c
> > +++ b/fs/btrfs/extent-tree.c
> > @@ -4837,7 +4837,7 @@ static int may_commit_transaction(struct btrfs_fs_info *fs_info,
> >  
> > spin_lock(&delayed_rsv->lock);
> > if (percpu_counter_compare(&space_info->total_bytes_pinned,
> > -  bytes - delayed_rsv->size) >= 0) {
> > +  bytes - delayed_rsv->size) < 0) {
> > spin_unlock(&delayed_rsv->lock);
> > return -ENOSPC;
> > }
> 
> I found this bug in my latest enospc investigation, too. However, the
> total_bytes_pinned counter is pretty broken. E.g., on my laptop:
> 
> $ sync; grep -H '' /sys/fs/btrfs/*/allocation/*/total_bytes_pinned
> /sys/fs/btrfs/f31cc926-37d3-442d-b50a-10c62d47badc/allocation/data/total_bytes_pinned:48693501952
> /sys/fs/btrfs/f31cc926-37d3-442d-b50a-10c62d47badc/allocation/metadata/total_bytes_pinned:-258146304
> /sys/fs/btrfs/f31cc926-37d3-442d-b50a-10c62d47badc/allocation/system/total_bytes_pinned:0
> 
> I have a patch to fix it that I haven't cleaned up yet, below. Without
> it, Bo's fix will probably cause early ENOSPCs. Dave, should we pull
> Bo's patch out of for-next? In any case, I'll get my fix sent out.
>

When data delayed refs are not merged, __btrfs_free_extent() decrements
the pinned bytes as expected, but when they get merged, the pinned bytes
only get incremented, never decremented, because the delayed ref is
deleted once its ref_count reaches 0.

This doesn't happen to tree block refs since a tree block never has more
than one ref, so ref_mod is either 1 or -1, and tree block refs get
removed via btrfs_free_tree_block(), which also needs to be changed.

Based on that, I think we can solve the problem by touching
btrfs_free_tree_block() and the data delayed refs instead of the head ref.

thanks,
-liubo

> diff --git a/fs/btrfs/delayed-ref.c b/fs/btrfs/delayed-ref.c
> index be70d90dfee5..93ffa898df6d 100644
> --- a/fs/btrfs/delayed-ref.c
> +++ b/fs/btrfs/delayed-ref.c
> @@ -470,7 +470,8 @@ add_delayed_ref_tail_merge(struct btrfs_trans_handle *trans,
>  static noinline void
>  update_existing_head_ref(struct btrfs_delayed_ref_root *delayed_refs,
>struct btrfs_delayed_ref_node *existing,
> -  struct btrfs_delayed_ref_node *update)
> +  struct btrfs_delayed_ref_node *update,
> +  int *old_ref_mod_ret)
>  {
>   struct btrfs_delayed_ref_head *existing_ref;
>   struct btrfs_delayed_ref_head *ref;
> @@ -523,6 +524,8 @@ update_existing_head_ref(struct btrfs_delayed_ref_root *delayed_refs,
>* currently, for refs we just added we know we're a-ok.
>*/
>   old_ref_mod = existing_ref->total_ref_mod;
> + if (old_ref_mod_ret)
> + *old_ref_mod_ret = old_ref_mod;
>   existing->ref_mod += update->ref_mod;
>   existing_ref->total_ref_mod += update->ref_mod;
>  
> @@ -550,7 +553,8 @@ add_delayed_ref_head(struct btrfs_fs_info *fs_info,
>struct btrfs_delayed_ref_node *ref,
>struct btrfs_qgroup_extent_record *qrecord,
>u64 bytenr, u64 num_bytes, u64 ref_root, u64 reserved,
> -  int action, int is_data, int *qrecord_inserted_ret)
> +  int action, int is_data, int *qrecord_inserted_ret,
> +  int *old_ref_mod, int *new_ref_mod)
>  {
>   struct btrfs_delayed_ref_head *existing;
>   struct btrfs_delayed_ref_head *head_ref = NULL;
> @@ -638,7 +642,8 @@ add_delayed_ref_head(struct btrfs_fs_info *fs_info,
>   if (existing) {
>   WARN_ON(ref_root && reserved && existing->qgroup_ref_root
>   && existing->qgroup_reserved);
> - update_existing_head_ref(delayed_refs, &existing->node, ref);
> + update_existing_head_ref(delayed_refs, &existing->node, ref,
> +  old_ref_mod);
>   /*
>* we've updated the existing ref, free the newly
>* allocated ref
> @@ -646,6 +651,8 @@ add_delayed_ref_head(struct btrfs_fs_info *fs_info,
>   kmem_cache_free(btrfs_delayed_ref_head_cachep, head_ref);
>   head_ref = existing;
>   } else {
> +

Re: [PATCH] Btrfs: skip commit transaction if we don't have enough pinned bytes

2017-06-05 Thread Liu Bo
On Fri, Jun 02, 2017 at 11:14:13AM -0700, Omar Sandoval wrote:
> On Fri, May 19, 2017 at 11:39:15AM -0600, Liu Bo wrote:
> > We commit the transaction in order to reclaim space from pinned bytes
> > because it can process delayed refs. In may_commit_transaction(), we
> > first check whether the pinned bytes alone are enough for the required
> > space; we then check whether pinned bytes plus the bytes reserved for
> > delayed insert are enough.
> > 
> > This changes the code to match the above logic.
> > 
> > Signed-off-by: Liu Bo 
> > ---
> >  fs/btrfs/extent-tree.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
> > index e390451c72e6..bded1ddd1bb6 100644
> > --- a/fs/btrfs/extent-tree.c
> > +++ b/fs/btrfs/extent-tree.c
> > @@ -4837,7 +4837,7 @@ static int may_commit_transaction(struct btrfs_fs_info *fs_info,
> >  
> > spin_lock(&delayed_rsv->lock);
> > if (percpu_counter_compare(&space_info->total_bytes_pinned,
> > -  bytes - delayed_rsv->size) >= 0) {
> > +  bytes - delayed_rsv->size) < 0) {
> > spin_unlock(&delayed_rsv->lock);
> > return -ENOSPC;
> > }
> 
> I found this bug in my latest enospc investigation, too. However, the
> total_bytes_pinned counter is pretty broken. E.g., on my laptop:
> 
> $ sync; grep -H '' /sys/fs/btrfs/*/allocation/*/total_bytes_pinned
> /sys/fs/btrfs/f31cc926-37d3-442d-b50a-10c62d47badc/allocation/data/total_bytes_pinned:48693501952
> /sys/fs/btrfs/f31cc926-37d3-442d-b50a-10c62d47badc/allocation/metadata/total_bytes_pinned:-258146304
> /sys/fs/btrfs/f31cc926-37d3-442d-b50a-10c62d47badc/allocation/system/total_bytes_pinned:0
> 
> I have a patch to fix it that I haven't cleaned up yet, below. Without
> it, Bo's fix will probably cause early ENOSPCs. Dave, should we pull
> Bo's patch out of for-next? In any case, I'll get my fix sent out.
>

While it does result in early ENOSPC, I don't think that's the fix's
fault; rather, we don't track this %pinned counter precisely.

Your output shows a negative %total_bytes_pinned for metadata; have you
seen a negative counter for data?

The problem I observed is in btrfs_free_tree_block(), which doesn't add
pinned bytes when it's not the last reference, while __btrfs_free_extent()
does decrement pinned bytes when it's not the last ref.  (I have made a
simple reproducer.)

But for data, it looks like we could over-add pinned bytes if a removed
ref gets re-added later; I'll try to make a test for this case.

> diff --git a/fs/btrfs/delayed-ref.c b/fs/btrfs/delayed-ref.c
> index be70d90dfee5..93ffa898df6d 100644
> --- a/fs/btrfs/delayed-ref.c
> +++ b/fs/btrfs/delayed-ref.c
> @@ -470,7 +470,8 @@ add_delayed_ref_tail_merge(struct btrfs_trans_handle *trans,
>  static noinline void
>  update_existing_head_ref(struct btrfs_delayed_ref_root *delayed_refs,
>struct btrfs_delayed_ref_node *existing,
> -  struct btrfs_delayed_ref_node *update)
> +  struct btrfs_delayed_ref_node *update,
> +  int *old_ref_mod_ret)

The head ref doesn't seem related to the underflow; we decrement the
counter when running the delayed refs, not the head ref.

>  {
>   struct btrfs_delayed_ref_head *existing_ref;
>   struct btrfs_delayed_ref_head *ref;
> @@ -523,6 +524,8 @@ update_existing_head_ref(struct btrfs_delayed_ref_root *delayed_refs,
>* currently, for refs we just added we know we're a-ok.
>*/
>   old_ref_mod = existing_ref->total_ref_mod;
> + if (old_ref_mod_ret)
> + *old_ref_mod_ret = old_ref_mod;
>   existing->ref_mod += update->ref_mod;
>   existing_ref->total_ref_mod += update->ref_mod;
>  
> @@ -550,7 +553,8 @@ add_delayed_ref_head(struct btrfs_fs_info *fs_info,
>struct btrfs_delayed_ref_node *ref,
>struct btrfs_qgroup_extent_record *qrecord,
>u64 bytenr, u64 num_bytes, u64 ref_root, u64 reserved,
> -  int action, int is_data, int *qrecord_inserted_ret)
> +  int action, int is_data, int *qrecord_inserted_ret,
> +  int *old_ref_mod, int *new_ref_mod)
>  {
>   struct btrfs_delayed_ref_head *existing;
>   struct btrfs_delayed_ref_head *head_ref = NULL;
> @@ -638,7 +642,8 @@ add_delayed_ref_head(struct btrfs_fs_info *fs_info,
>   if (existing) {
>   WARN_ON(ref_root && reserved && existing->qgroup_ref_root
>   && existing->qgroup_reserved);
> - update_existing_head_ref(delayed_refs, &existing->node, ref);
> + update_existing_head_ref(delayed_refs, &existing->node, ref,
> +  old_ref_mod);
>   /*
>* we've updated the existing ref, free the newly
>* allocated ref
> @@ -646,6 +651,8 @@ add_delayed_ref_head(struct btrfs_fs_info *fs_info,

Re: [PATCH] Btrfs: skip commit transaction if we don't have enough pinned bytes

2017-06-03 Thread Holger Hoffstätte
On 06/02/17 20:14, Omar Sandoval wrote:
> On Fri, May 19, 2017 at 11:39:15AM -0600, Liu Bo wrote:
>> We commit the transaction in order to reclaim space from pinned bytes
>> because it can process delayed refs. In may_commit_transaction(), we
>> first check whether the pinned bytes alone are enough for the required
>> space; we then check whether pinned bytes plus the bytes reserved for
>> delayed insert are enough.
>>
>> This changes the code to match the above logic.
>>
>> Signed-off-by: Liu Bo 
>> ---
>>  fs/btrfs/extent-tree.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
>> index e390451c72e6..bded1ddd1bb6 100644
>> --- a/fs/btrfs/extent-tree.c
>> +++ b/fs/btrfs/extent-tree.c
>> @@ -4837,7 +4837,7 @@ static int may_commit_transaction(struct btrfs_fs_info *fs_info,
>>  
>>  spin_lock(&delayed_rsv->lock);
>>  if (percpu_counter_compare(&space_info->total_bytes_pinned,
>> -   bytes - delayed_rsv->size) >= 0) {
>> +   bytes - delayed_rsv->size) < 0) {
>>  spin_unlock(&delayed_rsv->lock);
>>  return -ENOSPC;
>>  }
> 
> I found this bug in my latest enospc investigation, too. However, the
> total_bytes_pinned counter is pretty broken. E.g., on my laptop:
> 
> $ sync; grep -H '' /sys/fs/btrfs/*/allocation/*/total_bytes_pinned
> /sys/fs/btrfs/f31cc926-37d3-442d-b50a-10c62d47badc/allocation/data/total_bytes_pinned:48693501952
> /sys/fs/btrfs/f31cc926-37d3-442d-b50a-10c62d47badc/allocation/metadata/total_bytes_pinned:-258146304
> /sys/fs/btrfs/f31cc926-37d3-442d-b50a-10c62d47badc/allocation/system/total_bytes_pinned:0
> 
> I have a patch to fix it that I haven't cleaned up yet, below. Without
> it, Bo's fix will probably cause early ENOSPCs. Dave, should we pull
> Bo's patch out of for-next? In any case, I'll get my fix sent out.
[..patch snipped..]

This made me curious since I also found the underflowed metadata counter
on my system. I tried to reproduce it after un/remount (to reset the counters)
and noticed that I could reliably cause the metadata underflow by defragging
a few large subvolumes, which apparently creates enough extent tree movement
that the counter quickly goes bananas. It took some backporting, but with your
patch applied I can defrag away and have so far not seen a single counter
underflow; all of data/metadata/system are always positive after writes and
then reliably drop to 0 after sync/commit. Nice!
Just thought you'd want to know.

Holger


Re: [PATCH] Btrfs: skip commit transaction if we don't have enough pinned bytes

2017-06-02 Thread Omar Sandoval
On Fri, May 19, 2017 at 11:39:15AM -0600, Liu Bo wrote:
> We commit the transaction in order to reclaim space from pinned bytes
> because it can process delayed refs. In may_commit_transaction(), we
> first check whether the pinned bytes alone are enough for the required
> space; we then check whether pinned bytes plus the bytes reserved for
> delayed insert are enough.
> 
> This changes the code to match the above logic.
> 
> Signed-off-by: Liu Bo 
> ---
>  fs/btrfs/extent-tree.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
> index e390451c72e6..bded1ddd1bb6 100644
> --- a/fs/btrfs/extent-tree.c
> +++ b/fs/btrfs/extent-tree.c
> @@ -4837,7 +4837,7 @@ static int may_commit_transaction(struct btrfs_fs_info *fs_info,
>  
>   spin_lock(&delayed_rsv->lock);
>   if (percpu_counter_compare(&space_info->total_bytes_pinned,
> -bytes - delayed_rsv->size) >= 0) {
> +bytes - delayed_rsv->size) < 0) {
>   spin_unlock(&delayed_rsv->lock);
>   return -ENOSPC;
>   }

I found this bug in my latest enospc investigation, too. However, the
total_bytes_pinned counter is pretty broken. E.g., on my laptop:

$ sync; grep -H '' /sys/fs/btrfs/*/allocation/*/total_bytes_pinned
/sys/fs/btrfs/f31cc926-37d3-442d-b50a-10c62d47badc/allocation/data/total_bytes_pinned:48693501952
/sys/fs/btrfs/f31cc926-37d3-442d-b50a-10c62d47badc/allocation/metadata/total_bytes_pinned:-258146304
/sys/fs/btrfs/f31cc926-37d3-442d-b50a-10c62d47badc/allocation/system/total_bytes_pinned:0

I have a patch to fix it that I haven't cleaned up yet, below. Without
it, Bo's fix will probably cause early ENOSPCs. Dave, should we pull
Bo's patch out of for-next? In any case, I'll get my fix sent out.

diff --git a/fs/btrfs/delayed-ref.c b/fs/btrfs/delayed-ref.c
index be70d90dfee5..93ffa898df6d 100644
--- a/fs/btrfs/delayed-ref.c
+++ b/fs/btrfs/delayed-ref.c
@@ -470,7 +470,8 @@ add_delayed_ref_tail_merge(struct btrfs_trans_handle *trans,
 static noinline void
 update_existing_head_ref(struct btrfs_delayed_ref_root *delayed_refs,
 struct btrfs_delayed_ref_node *existing,
-struct btrfs_delayed_ref_node *update)
+struct btrfs_delayed_ref_node *update,
+int *old_ref_mod_ret)
 {
struct btrfs_delayed_ref_head *existing_ref;
struct btrfs_delayed_ref_head *ref;
@@ -523,6 +524,8 @@ update_existing_head_ref(struct btrfs_delayed_ref_root *delayed_refs,
 * currently, for refs we just added we know we're a-ok.
 */
old_ref_mod = existing_ref->total_ref_mod;
+   if (old_ref_mod_ret)
+   *old_ref_mod_ret = old_ref_mod;
existing->ref_mod += update->ref_mod;
existing_ref->total_ref_mod += update->ref_mod;
 
@@ -550,7 +553,8 @@ add_delayed_ref_head(struct btrfs_fs_info *fs_info,
 struct btrfs_delayed_ref_node *ref,
 struct btrfs_qgroup_extent_record *qrecord,
 u64 bytenr, u64 num_bytes, u64 ref_root, u64 reserved,
-int action, int is_data, int *qrecord_inserted_ret)
+int action, int is_data, int *qrecord_inserted_ret,
+int *old_ref_mod, int *new_ref_mod)
 {
struct btrfs_delayed_ref_head *existing;
struct btrfs_delayed_ref_head *head_ref = NULL;
@@ -638,7 +642,8 @@ add_delayed_ref_head(struct btrfs_fs_info *fs_info,
if (existing) {
WARN_ON(ref_root && reserved && existing->qgroup_ref_root
&& existing->qgroup_reserved);
-   update_existing_head_ref(delayed_refs, &existing->node, ref);
+   update_existing_head_ref(delayed_refs, &existing->node, ref,
+old_ref_mod);
/*
 * we've updated the existing ref, free the newly
 * allocated ref
@@ -646,6 +651,8 @@ add_delayed_ref_head(struct btrfs_fs_info *fs_info,
kmem_cache_free(btrfs_delayed_ref_head_cachep, head_ref);
head_ref = existing;
} else {
+   if (old_ref_mod)
+   *old_ref_mod = 0;
if (is_data && count_mod < 0)
delayed_refs->pending_csums += num_bytes;
delayed_refs->num_heads++;
@@ -655,6 +662,8 @@ add_delayed_ref_head(struct btrfs_fs_info *fs_info,
}
if (qrecord_inserted_ret)
*qrecord_inserted_ret = qrecord_inserted;
+   if (new_ref_mod)
+   *new_ref_mod = head_ref->total_ref_mod;
return head_ref;
 }
 
@@ -778,7 +787,8 @@ int btrfs_add_delayed_tree_ref(struct btrfs_fs_info *fs_info,
   struct btrfs_trans_handle *trans,
   u64 bytenr, u64 num_bytes, u64 parent,

Re: [PATCH] Btrfs: skip commit transaction if we don't have enough pinned bytes

2017-05-25 Thread Liu Bo
On Thu, May 25, 2017 at 06:50:48PM +0200, David Sterba wrote:
> On Tue, May 23, 2017 at 12:06:40PM +0300, Nikolay Borisov wrote:
> > 
> > 
> > On 19.05.2017 20:39, Liu Bo wrote:
> > > We commit the transaction in order to reclaim space from pinned bytes
> > > because it can process delayed refs. In may_commit_transaction(), we
> > > first check whether the pinned bytes alone are enough for the required
> > > space; we then check whether pinned bytes plus the bytes reserved for
> > > delayed insert are enough.
> > > 
> > > This changes the code to match the above logic.
> > > 
> > > Signed-off-by: Liu Bo 
> > 
> > Please add:
> > Fixes: b150a4f10d87 ("Btrfs: use a percpu to keep track of possibly
> > pinned bytes")
> > 
> > > ---
> > >  fs/btrfs/extent-tree.c | 2 +-
> > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > > 
> > > diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
> > > index e390451c72e6..bded1ddd1bb6 100644
> > > --- a/fs/btrfs/extent-tree.c
> > > +++ b/fs/btrfs/extent-tree.c
> > > @@ -4837,7 +4837,7 @@ static int may_commit_transaction(struct btrfs_fs_info *fs_info,
> > >  
> > >   spin_lock(&delayed_rsv->lock);
> > >   if (percpu_counter_compare(&space_info->total_bytes_pinned,
> > > -bytes - delayed_rsv->size) >= 0) {
> > > +bytes - delayed_rsv->size) < 0) {
> > >   spin_unlock(&delayed_rsv->lock);
> > >   return -ENOSPC;
> > >   }
> > > 
> > 
> > With the minor nit above:
> > 
> > Reviewed-by: Nikolay Borisov 
> > Tested-by: Nikolay Borisov 
> 
> Patch applied with updated tags.

Thank you for that!

-liubo


Re: [PATCH] Btrfs: skip commit transaction if we don't have enough pinned bytes

2017-05-25 Thread David Sterba
On Tue, May 23, 2017 at 12:06:40PM +0300, Nikolay Borisov wrote:
> 
> 
> On 19.05.2017 20:39, Liu Bo wrote:
> > We commit the transaction in order to reclaim space from pinned bytes
> > because it can process delayed refs. In may_commit_transaction(), we
> > first check whether the pinned bytes alone are enough for the required
> > space; we then check whether pinned bytes plus the bytes reserved for
> > delayed insert are enough.
> > 
> > This changes the code to match the above logic.
> > 
> > Signed-off-by: Liu Bo 
> 
> Please add:
> Fixes: b150a4f10d87 ("Btrfs: use a percpu to keep track of possibly
> pinned bytes")
> 
> > ---
> >  fs/btrfs/extent-tree.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
> > index e390451c72e6..bded1ddd1bb6 100644
> > --- a/fs/btrfs/extent-tree.c
> > +++ b/fs/btrfs/extent-tree.c
> > @@ -4837,7 +4837,7 @@ static int may_commit_transaction(struct btrfs_fs_info *fs_info,
> >  
> > spin_lock(&delayed_rsv->lock);
> > if (percpu_counter_compare(&space_info->total_bytes_pinned,
> > -  bytes - delayed_rsv->size) >= 0) {
> > +  bytes - delayed_rsv->size) < 0) {
> > spin_unlock(&delayed_rsv->lock);
> > return -ENOSPC;
> > }
> > 
> 
> With the minor nit above:
> 
> Reviewed-by: Nikolay Borisov 
> Tested-by: Nikolay Borisov 

Patch applied with updated tags.


Re: [PATCH] Btrfs: skip commit transaction if we don't have enough pinned bytes

2017-05-23 Thread Nikolay Borisov


On 19.05.2017 20:39, Liu Bo wrote:
> We commit the transaction in order to reclaim space from pinned bytes
> because it can process delayed refs. In may_commit_transaction(), we
> first check whether the pinned bytes alone are enough for the required
> space; we then check whether pinned bytes plus the bytes reserved for
> delayed insert are enough.
> 
> This changes the code to match the above logic.
> 
> Signed-off-by: Liu Bo 

Please add:
Fixes: b150a4f10d87 ("Btrfs: use a percpu to keep track of possibly
pinned bytes")

> ---
>  fs/btrfs/extent-tree.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
> index e390451c72e6..bded1ddd1bb6 100644
> --- a/fs/btrfs/extent-tree.c
> +++ b/fs/btrfs/extent-tree.c
> @@ -4837,7 +4837,7 @@ static int may_commit_transaction(struct btrfs_fs_info *fs_info,
>  
>   spin_lock(&delayed_rsv->lock);
>   if (percpu_counter_compare(&space_info->total_bytes_pinned,
> -bytes - delayed_rsv->size) >= 0) {
> +bytes - delayed_rsv->size) < 0) {
>   spin_unlock(&delayed_rsv->lock);
>   return -ENOSPC;
>   }
> 

With the minor nit above:

Reviewed-by: Nikolay Borisov 
Tested-by: Nikolay Borisov 


[PATCH] Btrfs: skip commit transaction if we don't have enough pinned bytes

2017-05-19 Thread Liu Bo
We commit the transaction in order to reclaim space from pinned bytes
because it can process delayed refs. In may_commit_transaction(), we
first check whether the pinned bytes alone are enough for the required
space; we then check whether pinned bytes plus the bytes reserved for
delayed insert are enough.

This changes the code to match the above logic.

Signed-off-by: Liu Bo 
---
 fs/btrfs/extent-tree.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index e390451c72e6..bded1ddd1bb6 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -4837,7 +4837,7 @@ static int may_commit_transaction(struct btrfs_fs_info *fs_info,
 
spin_lock(&delayed_rsv->lock);
if (percpu_counter_compare(&space_info->total_bytes_pinned,
-  bytes - delayed_rsv->size) >= 0) {
+  bytes - delayed_rsv->size) < 0) {
spin_unlock(&delayed_rsv->lock);
return -ENOSPC;
}
-- 
2.13.0
