On Wed, Apr 24, 2013 at 09:00:16AM -0400, Josef Bacik wrote:
> On Wed, Apr 24, 2013 at 02:57:40AM -0600, Liu Bo wrote:
> > On Tue, Apr 23, 2013 at 02:48:54PM -0400, Josef Bacik wrote:
> > > If we fail to load block groups halfway through we can leave
> > > extent_states on the excluded tree.  This is because we just look up
> > > the supers and add them to the excluded tree regardless of which block
> > > group we are currently looking at.  This is a problem because we remove
> > > the excluded extents only for the range of the block group, so if we
> > > never load a block group for one of the excluded extents we will never
> > > free it.  This fixes the problem by only adding excluded extents that
> > > fall in the range of the block group we care about.  With this patch we
> > > are no longer leaking space when we fail to read all of the block
> > > groups.
> > > Thanks,
> > > 
> > > Signed-off-by: Josef Bacik <jba...@fusionio.com>
> > > ---
> > > V1->V2: fixed a slight problem where I should have been comparing to
> > > the end of the block group, not the beginning.
> > > 
> > >  fs/btrfs/extent-tree.c |   24 +++++++++++++++++++++---
> > >  1 files changed, 21 insertions(+), 3 deletions(-)
> > > 
> > > diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
> > > index b441be3..a81f689 100644
> > > --- a/fs/btrfs/extent-tree.c
> > > +++ b/fs/btrfs/extent-tree.c
> > > @@ -270,9 +270,27 @@ static int exclude_super_stripes(struct btrfs_root *root,
> > >                   return ret;
> > >  
> > >           while (nr--) {
> > > -                 cache->bytes_super += stripe_len;
> > > -                 ret = add_excluded_extent(root, logical[nr],
> > > -                                           stripe_len);
> > > +                 u64 start, len;
> > > +
> > > +                 if (logical[nr] > cache->key.objectid +
> > > +                     cache->key.offset)
> > > +                         continue;
> > > +
> > > +                 if (logical[nr] + stripe_len <= cache->key.objectid)
> > > +                         continue;
> > 
> > Hmm... I doubt that these two cases can happen.
> > 
> > btrfs_rmap_block() ensures that logical[nr] will be larger than
> > cache->key.objectid.
> > 
> > Am I missing something?
> 
> Yeah, we can still get ranges that are past the end of the cache; just put a
> printk in there and you'll see it happen.  Now it's not likely that a logical
> address will be less than the start, but better safe than sorry.  Thanks,
> 

But if it's really past the end of the cache, there might be something wrong in
btrfs_rmap_block() IMO.

OK, I'll dig into it.
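
For reference, this is roughly the kind of debug-only printk I plan to drop
into the while (nr--) loop of exclude_super_stripes(), mirroring the skip
conditions in the new hunk, just to see when logical[nr] actually falls
outside the block group range (rough sketch, not meant for the patch itself):

		/* debug only: report stripes that the new checks would skip */
		if (logical[nr] > cache->key.objectid + cache->key.offset ||
		    logical[nr] + stripe_len <= cache->key.objectid)
			printk(KERN_DEBUG "btrfs: super stripe %llu (len %llu) outside block group %llu+%llu\n",
			       (unsigned long long)logical[nr],
			       (unsigned long long)stripe_len,
			       (unsigned long long)cache->key.objectid,
			       (unsigned long long)cache->key.offset);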

thanks,
liubo