On Mon, 2003-09-08 at 11:08, Erwin Burgstaller wrote:
> I actually found this out by writing a 700MB file to a 1GB file
> system, removing it, and then copying the same file in again; the
> second copy aborted with only 300MB written. On a small file system
> (e.g. /boot) this can be reproduced very quickly, so triggering the
> fix purely by time could fail there too.

Interesting.  Maybe I could set a flag when there are pending deletes,
and any new allocation attempt would then force the journal buffer to
disk if that flag is set.
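To make the idea concrete, here is a minimal user-space sketch of that
flag scheme. All names below (journal, pending_deletes, commit_delete,
alloc_blocks) are made up for illustration; they are not actual JFS
identifiers, and the real change would live in the kernel journal code.

```c
#include <stdbool.h>

/* Hypothetical state for the in-memory journal page. */
struct journal {
    bool pending_deletes;  /* set when a delete has committed to the
                            * journal buffer but not yet reached disk */
    int  flushes;          /* forced journal writes (for illustration) */
};

/* Start I/O on the partial journal page. */
static void journal_flush(struct journal *log)
{
    log->flushes++;
    log->pending_deletes = false;
}

/* Deferred write: just note that freed blocks aren't on disk yet. */
static void commit_delete(struct journal *log)
{
    log->pending_deletes = true;
}

/* New allocation: force the journal out only when deletes are
 * pending, so the common path keeps the deferred-write win. */
static void alloc_blocks(struct journal *log)
{
    if (log->pending_deletes)
        journal_flush(log);
    /* ... proceed with the block allocation itself ... */
}
```

The point of the flag is that the extra I/O happens only on the
allocate-after-delete path that triggers the reported failure, not on
every partial journal page.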

We used to start I/O on partial journal pages any time there was no
active I/O to the journal, but we saw a significant performance gain
when we deferred those writes until the page was full, or at least until
a synchronous transaction forced us to write it.  I'd like to fix this
problem without introducing a lot of unnecessary I/O.

Thanks,
Shaggy
> 
> Erwin

-- 
David Kleikamp
IBM Linux Technology Center

_______________________________________________
Jfs-discussion mailing list
[EMAIL PROTECTED]
http://www-124.ibm.com/developerworks/oss/mailman/listinfo/jfs-discussion