On Fri, Sep 28, 2018 at 07:17:55AM -0400, Josef Bacik wrote:
> With severe fragmentation we can end up with our inode rsv size being
> huge during writeout, which would cause us to need to make very large
> metadata reservations.  However we may not actually need that much once
> writeout is complete.  So instead, try to make our reservation and, if
> we can't, recalculate the new reservation size and try again.  If the
> reservation size doesn't change between tries, then we know we are
> actually out of space and can error out.
> 
> Signed-off-by: Josef Bacik <jo...@toxicpanda.com>
> ---
>  fs/btrfs/extent-tree.c | 26 ++++++++++++++++++++++++--
>  1 file changed, 24 insertions(+), 2 deletions(-)
> 
> diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
> index 7a53f6a29ebc..461b8076928b 100644
> --- a/fs/btrfs/extent-tree.c
> +++ b/fs/btrfs/extent-tree.c
> @@ -5781,10 +5781,11 @@ static int btrfs_inode_rsv_refill(struct btrfs_inode *inode,
>  {
>       struct btrfs_root *root = inode->root;
>       struct btrfs_block_rsv *block_rsv = &inode->block_rsv;
> -     u64 num_bytes = 0;
> +     u64 num_bytes = 0, last = 0;
>       u64 qgroup_num_bytes = 0;
>       int ret = -ENOSPC;
>  
> +again:

Could this be restructured so there's no new 'goto again;' pattern
added?
