On Wed, 22 Jun 2005, Eli Billauer wrote:

> guy keren wrote:
>
> > I don't think it's the "disk gets full". i think it's "the page-cache gets
> > full". try this: get a partition that is already quite full, and run the
> > test on it. you will not see this problem.
>
> Well, you may get other results if you test it, but what I saw was that
> when the partition was nearly full, I got one behaviour. I ran the same
> test after deleting a few gigabytes of data from the partition and got
> something much better. Back and forth. This is how I reached the conclusion.

so it _could_ be that, due to fragmentation, instead of writing the large
set of data consecutively, the system wrote it in several write commands
to different parts of the hard drive.

> The question I find appealing in this context is when the filesystem
> looks for free blocks. If it does it only by demand, this would explain
> what happens.

the file system keeps a list of all free blocks. it looks for a free
block _in this list_ only when it actually needs one. furthermore, it
usually does not allocate a single block - rather, it tries to
pre-allocate several consecutive blocks, assuming they'll soon be
needed. it does this in order to avoid spreading the file all over the
disk.
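
to make the idea concrete, here is a toy sketch (this is not the real
ext2/ext3 allocator - the block-group size and pre-allocation window are
made up): it allocates from a free-block bitmap and reserves a short run
of consecutive blocks in one go:

/* toy free-block bitmap allocator: find a free block near 'goal' and
 * reserve up to PREALLOC consecutive blocks, so a growing file gets
 * adjacent blocks instead of being scattered over the disk. */
#include <stdio.h>
#include <string.h>

#define NBLOCKS  1024   /* blocks in this toy "block group"    */
#define PREALLOC 8      /* how many consecutive blocks to grab */

static unsigned char used[NBLOCKS];   /* 1 = allocated, 0 = free */

/* returns the first reserved block, or -1 if the group is full;
 * the number of blocks actually reserved is stored in *got */
static int alloc_blocks(int goal, int want, int *got)
{
    for (int i = 0; i < NBLOCKS; i++) {
        int start = (goal + i) % NBLOCKS;
        if (used[start])
            continue;
        int n = 0;
        while (n < want && start + n < NBLOCKS && !used[start + n])
            n++;
        memset(used + start, 1, n);   /* reserve the whole run */
        *got = n;
        return start;
    }
    return -1;                        /* no free block at all */
}

int main(void)
{
    int got, first;

    memset(used, 1, 100);             /* pretend blocks 0-99 are taken */

    first = alloc_blocks(100, PREALLOC, &got);
    printf("reserved %d consecutive blocks starting at %d\n", got, first);
    return 0;
}

the real allocators are much smarter (block groups, goal blocks near the
file's other blocks, reservations), but the principle is the same.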

-- 
guy

"For world domination - press 1,
 or dial 0, and please hold, for the creator." -- nob o. dy

--------------------------------------------------------------------------
Haifa Linux Club Mailing List (http://www.haifux.org)
To unsub send an empty message to [EMAIL PROTECTED]

