Hi,

On Thu, 2005-03-17 at 17:23, Stephen C. Tweedie wrote:

> I wrote a small program to calculate the total indirect tree overhead
> for any given file size, and 0x1ff7fffe000 turned out to be the largest
> file we can get without the total i_blocks overflowing 2^32.
> 
> But in testing, that *just* wrapped --- we need to limit the file to be
> one page smaller than that to deal with the possibility of an EA/ACL
> block being accounted against i_blocks.
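
For reference, a rough sketch of that overhead calculation (a
reconstruction rather than the program itself; it assumes a 4K-block
filesystem, i_blocks counted in 512-byte sectors, and the usual 12
direct pointers plus single/double/triple indirect blocks):

        #include <stdio.h>
        #include <stdint.h>

        int main(void)
        {
                const uint64_t bs   = 4096;             /* assumed blocksize */
                const uint64_t appb = bs / 4;           /* block pointers per indirect block */
                const uint64_t size = 0x1ff7fffe000ULL; /* the file size above */

                uint64_t data = (size + bs - 1) / bs;   /* data blocks */
                uint64_t left = data > 12 ? data - 12 : 0;
                uint64_t meta = 0;

                if (left) {                             /* single indirect */
                        meta += 1;
                        left -= left > appb ? appb : left;
                }
                if (left) {                             /* double indirect */
                        uint64_t chunk = left > appb * appb ? appb * appb : left;
                        meta += 1 + (chunk + appb - 1) / appb;
                        left -= chunk;
                }
                if (left) {                             /* triple indirect */
                        meta += 1 + (left + appb * appb - 1) / (appb * appb)
                                  + (left + appb - 1) / appb;
                }

                printf("i_blocks = %llu (2^32 = %llu)\n",
                       (unsigned long long)((data + meta) * (bs / 512)),
                       1ULL << 32);
                return 0;
        }

With those assumptions the file itself comes out 8 sectors (one 4K
block) short of 2^32, so the single EA/ACL block is exactly what pushes
i_blocks to 2^32 and wraps it to zero.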

On a side issue, e2fsck was unable to find any problem on that
filesystem after the i_blocks had wrapped exactly to zero.

The bug seems to be in e2fsck/pass1.c: we do the num_blocks checking
inside process_block(), which is called as an inode block iteration
function from check_blocks().  Then, later, check_blocks() does

        if (inode->i_file_acl && check_ext_attr(ctx, pctx, block_buf))
                pb.num_blocks++;

        pb.num_blocks *= (fs->blocksize / 512);

but without any further test to see whether pb.num_blocks has exceeded
max_blocks.  So by the time we get to the end of check_blocks(), we're
comparing the wrapped i_blocks on disk against the equally wrapped
num_blocks in memory, and e2fsck fails to notice anything wrong.
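
Just to spell the wrap out (a standalone illustration, not the e2fsck
code): once both counts have been truncated to 32 bits, a true total of
exactly 2^32 sectors looks consistent from both sides, so the i_blocks
mismatch check has nothing to trip over.

        #include <stdio.h>
        #include <stdint.h>

        int main(void)
        {
                uint64_t true_sectors = 1ULL << 32;           /* real count, 512-byte sectors */
                uint32_t on_disk   = (uint32_t)true_sectors;  /* i_blocks field: wraps to 0 */
                uint32_t in_memory = (uint32_t)true_sectors;  /* wrapped num_blocks: also 0 */

                printf("i_blocks on disk = %u, computed = %u -> %s\n",
                       (unsigned)on_disk, (unsigned)in_memory,
                       on_disk == in_memory ? "looks consistent" : "mismatch");
                return 0;
        }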

The fix may be as simple as just moving the

        if (inode->i_file_acl && check_ext_attr(ctx, pctx, block_buf))
                pb.num_blocks++;

earlier in the function; Ted, do you see any problems with that?
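
Roughly what I have in mind (a sketch only, with the surrounding
check_blocks() code elided; exact placement to be checked against the
real source):

        /*
         * Account the EA/ACL block before the block iteration, so that
         * the num_blocks-vs-max_blocks check in process_block() sees a
         * running total which already includes it and can trip before
         * anything wraps.
         */
        if (inode->i_file_acl && check_ext_attr(ctx, pctx, block_buf))
                pb.num_blocks++;

        /* ... inode block iteration via process_block() ... */

        pb.num_blocks *= (fs->blocksize / 512);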

--Stephen
