Incidentally, this is part of how QFS gets its performance
for streaming I/O. We use an "allocate forward" policy,
allow very large allocation blocks, and separate the
metadata from data. This allows us to write (or read) data
in fairly large I/O requests, without unnecessary...
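The "allocate forward" policy above can be sketched as a simple cursor-bump allocator; the struct and function names here are invented for illustration, not QFS's actual interfaces:

```c
#include <stdint.h>

/* Hypothetical "allocate forward" allocator: space is handed out by
 * advancing a cursor, so successive allocations -- even very large
 * ones -- land contiguously and can be written (or read) with one
 * large sequential I/O instead of many small scattered ones. */
struct fwd_alloc {
	uint64_t cursor;	/* next free byte offset on the device */
	uint64_t end;		/* end of the region */
};

/* Return the starting offset of a contiguous run of 'len' bytes,
 * or UINT64_MAX if the region is exhausted. */
uint64_t
fwd_alloc_blocks(struct fwd_alloc *fa, uint64_t len)
{
	if (fa->end - fa->cursor < len)
		return UINT64_MAX;
	uint64_t off = fa->cursor;
	fa->cursor += len;	/* never look backward for free space */
	return off;
}
```

Because the cursor only moves forward, two back-to-back 256K allocations are guaranteed adjacent on disk, which is what makes the single large streaming write possible.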
On Aug 11, 2006, at 12:38 PM, Jonathan Adams wrote:
The problem is that you don't know the actual *contents* of the
parent block
until *all* of its children have been written to their final
locations.
(This is because the block pointer's value depends on the final
location)
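A toy model of that constraint (simplified stand-ins, not the real blkptr_t or zio code): the parent's on-disk contents embed each child's final address and checksum, so the parent cannot be finalized or written until every child has been placed:

```c
#include <stdint.h>

/* Simplified stand-in for a ZFS-style block pointer. */
typedef struct blkptr {
	uint64_t dva;		/* child's final on-disk address */
	uint64_t checksum;	/* checksum of the child's contents */
} blkptr_t;

static uint64_t next_off;	/* next free offset on the toy disk */

/* Pretend to write a buffer; return the location it landed at. */
static uint64_t
disk_write(const void *buf, uint64_t len)
{
	(void)buf;
	uint64_t off = next_off;
	next_off += len;
	return off;
}

static uint64_t
toy_checksum(const void *buf, uint64_t len)
{
	const uint8_t *p = buf;
	uint64_t sum = 0;
	while (len--)
		sum = sum * 31 + *p++;
	return sum;
}

/* Write two child (data) blocks, then their parent.  The parent's
 * contents -- the block pointers -- are not known until each child
 * has a final location, so the parent must be written last. */
uint64_t
write_parent(const uint8_t c0[512], const uint8_t c1[512])
{
	blkptr_t parent[2];
	parent[0].dva = disk_write(c0, 512);
	parent[0].checksum = toy_checksum(c0, 512);
	parent[1].dva = disk_write(c1, 512);
	parent[1].checksum = toy_checksum(c1, 512);
	/* Only now are the parent's contents fully determined. */
	return disk_write(parent, sizeof (parent));
}
```

The same dependency repeats at every level, which is what forces the bottom-up write order all the way to the überblock.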
But I know whe...
On Fri, Aug 11, 2006 at 11:04:06AM -0500, Anton Rang wrote:
> >Once the data blocks are on disk we have the information
> >necessary to update the indirect blocks iteratively up to
> >the ueberblock. Those are the smaller I/Os; I guess that
> >because of ditto blocks they go to phy...
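For what ditto blocks add here, a minimal sketch (the spread constant and placement policy are invented for illustration): each metadata block gets several copies at widely separated addresses, which is one reason the metadata updates become several small, scattered writes rather than one large one:

```c
#include <stdint.h>

/* Illustrative ditto-block placement: keep extra copies of an
 * important (usually metadata) block far apart on disk -- in real
 * ZFS, ideally on different vdevs.  The spacing here is made up. */
#define DITTO_SPREAD (1ULL << 30)	/* place copies ~1 GB apart */

/* Fill 'dvas' with 'copies' addresses for one metadata block. */
void
ditto_place(uint64_t base, int copies, uint64_t dvas[])
{
	for (int i = 0; i < copies; i++)
		dvas[i] = base + (uint64_t)i * DITTO_SPREAD;
}
```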
On Aug 9, 2006, at 8:18 AM, Roch wrote:
So while I'm feeling optimistic :-) we really ought to be
able to do this in two I/O operations. If we have, say, 500K
of data to write (including all of the metadata), we should
be able to allocate a contiguous 500K block on disk and
write that with a single operation. Then we update the überblock...
On Tue, Anton B. Rang wrote:
So while I'm feeling optimistic :-) we really ought to be able to do this in
two I/O operations. If we have, say, 500K of data to write (including all of
the metadata), we should be able to allocate a contiguous 500K block on disk
and write that with a single operation. Then we update the überblock...
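The proposal can be sketched as follows, with made-up helper names: because the whole extent is reserved up front, every block's final address within it is known before any I/O is issued, so the data and the indirect blocks that point at it can be assembled in memory and committed with one large write plus one überblock update:

```c
#include <stdint.h>

struct io_log {
	int writes;	/* count of write operations issued */
};

static void
issue_write(struct io_log *log, uint64_t off, uint64_t len)
{
	(void)off;
	(void)len;
	log->writes++;
}

/* 'extent' is the base of one contiguous allocation large enough for
 * the data blocks plus the indirect blocks that point at them.  The
 * layout inside the extent is decided before any I/O goes out, so
 * child addresses can be filled into parents entirely in memory.
 * Returns the total number of I/O operations issued. */
int
txg_commit(struct io_log *log, uint64_t extent, uint64_t data_len,
    uint64_t meta_len)
{
	/* Indirect blocks live at a known offset inside the same
	 * extent; their block pointers need no completed data writes. */
	uint64_t meta_off = extent + data_len;
	(void)meta_off;

	issue_write(log, extent, data_len + meta_len);	/* one big write */
	issue_write(log, 0, 1024);			/* überblock update */
	return log->writes;
}
```

This is only a counting model of the scheme under discussion, not how the ZFS transaction-group commit actually works today; the point is that the I/O count stays at two regardless of how many metadata blocks the extent contains.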