On Thu, 14 Jun 2007 19:04:27 -0700 (PDT) Christoph Lameter <[EMAIL PROTECTED]> wrote:
> > > Of course there is. The seeks are reduced since there are an factor
> > > of 16 less metadata blocks. fsck does not read files. It just reads
> > > metadata structures. And the larger contiguous areas the faster.
> >
> > Some metadata is contiguous: inode tables, some directories (if they got
> > lucky), bitmap tables. But fsck surely reads them in a single swoop
> > anyway, so there's no gain there.
>
> The metadata needs to refer to 1/16th of the earlier pages that need to
> be tracked. metadata is shrunk significantly.

Only if the filesystems are altered to use larger blocksizes, and if the
operator then chooses to use that feature. Then they suck for small-sized
(and even medium-sized) files.

So you're still talking about corner cases: specialised applications which
require careful setup and administrator intervention. What can we do to
optimise the common case?

> > Other metadata (indirect blocks) are 100% discontiguous, and reading
> > those with a 64k IO into 64k of memory is completely dumb.
>
> The effect of a larger page size is that the filesystem will
> place more meta data into a single page instead of spreading it out.
> Reading a mass of meta data with a 64k read is an intelligent choice to
> make in particular if there is a large series of such reads.

Again: requires larger blocksize: specialised, uninteresting for what will
remain the common case: 4k blocksize.

The alleged fsck benefit is also unrelated to variable PAGE_CACHE_SIZE.
It's a feature of larger (unwieldy?) blocksize, and xfs already has that
working (doesn't it?)  There may be some benefits to some future version
of ext4.
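[A rough back-of-the-envelope sketch of the "factor of 16" claim being
argued over above. This is not kernel code; it assumes an ext2/ext3-style
indirect-block scheme with 4-byte block pointers, ignores the inode's 12
direct pointers, and only applies when the filesystem actually uses the
larger blocksize.]

#include <stdio.h>

/*
 * Count the block pointers needed to map a file of file_bytes at a given
 * blocksize: one pointer per data block, plus the indirect blocks that
 * hold those pointers (single-indirect level only, for illustration).
 */
static unsigned long long pointers_needed(unsigned long long file_bytes,
					  unsigned long long blocksize)
{
	unsigned long long data_blocks =
		(file_bytes + blocksize - 1) / blocksize;
	unsigned long long ptrs_per_block = blocksize / 4;
	unsigned long long indirect_blocks =
		(data_blocks + ptrs_per_block - 1) / ptrs_per_block;

	return data_blocks + indirect_blocks;
}

int main(void)
{
	unsigned long long file_bytes = 1ULL << 30;	/* 1GB file */

	/* ~262400 pointers at 4k vs ~16385 at 64k: roughly the 16x factor */
	printf("4k  blocks: %llu pointers\n",
	       pointers_needed(file_bytes, 4096));
	printf("64k blocks: %llu pointers\n",
	       pointers_needed(file_bytes, 65536));
	return 0;
}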