Hi,
On Wed, 9 Feb 2000 14:30:13 -0500 (EST), Alexander Viro
[EMAIL PROTECTED] said:

> On Wed, 9 Feb 2000 [EMAIL PROTECTED] wrote:
>
> > with 2k blocks and 128 byte fragments, we get to really reduce wasted
> > space below any other system i've ever experienced.
>
> Erm... I'm afraid that you are missing the point. You will get the
> hardware sectors shared between the files. And you can't pass requests
> smaller than that. _And_ you have to lock the bh when you do IO. Now,
> estimate the fun with deadlocks...
That shouldn't matter. In the new VM it would be pretty trivial for the
filesystem to reserve a separate address_space against which to cache
fragment blocks. Populating that address_space when we want to read a
fragment block doesn't have to be any more complex than populating the
page cache already is. IO itself shouldn't be hard.
Yes, this will end up double-caching fragmented files to some extent,
since we'll have to reserve a separate, non-physically-mapped page for
the tail of a fragmented file.
Allocation/deallocation of fragments themselves obviously has to be done
very carefully, but we already have to deal with that sort of race in
the filesystem for normal allocations --- this isn't really any
different in principle.
--Stephen