Dave Kleikamp wrote:
> 
> ...
> Andrew,  locking the pages was not the right thing to do, but I did it
> to prevent a race condition that caused the directory inode's address
> space to go away before its metapages were written.  The problem is
> that some of the metadata pages are mapped to directory inodes, and I
> couldn't find a better way to make the flushing and clearing of the
> inode wait on the transaction holding the metadata in memory.  The
> easiest way to reproduce the race is to unmount the partition
> immediately after creating a new entry in the root directory of the
> volume.
> 
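The interleaving, as I understand Dave's description (iput/clear_inode
and txCommit are the real entry points; the failure sequence itself is
a sketch of the race, not actual JFS code):

        umount thread                       transaction/log thread
        -------------                       ----------------------
        iput(dir_inode)
          clear_inode(dir_inode)
            tears down dir_inode's
            address space
                                            txCommit()/log sync finishes,
                                            tries to write the still-dirty
                                            metapage through dir_inode's
                                            mapping, which is gone

Keeping the page locked makes the address-space teardown wait on the
page, which is how the lock was closing that window.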
> I have intended to revisit the problem, but I haven't gotten there yet.
> 
> Locking the pages also prevents JFS from working with a page size of
> greater than 4K, since more than one metapage may try to map to the same
> page.
> 
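For the archives, the arithmetic behind that: JFS metapages are PSIZE
(4096, per jfs_filsys.h) bytes, so with a larger kernel page several
metapages land in one struct page.  A sketch (metapages_per_page is
illustrative, not a real JFS symbol):

        #define PSIZE   4096    /* JFS metadata page size (jfs_filsys.h) */

        /*
         * Number of metapages sharing one struct page: 1 with 4K kernel
         * pages, where per-metapage page locking happens to work, but 4
         * on a 16K-page machine, where the second metapage to lock an
         * already-locked page blocks forever.
         */
        int metapages_per_page = PAGE_SIZE / PSIZE;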

Thanks, Dave.

I guess all this stuff is still a couple of weeks away from
readiness-for-inclusion.

I'll do a first send of the core writeback changes tonight.
JFS works OK with that code (well, it did in 2.5.8), but
I suspect that it's subtly broken.  I'm no longer adding
dirty buffers to inode.i_dirty_data_buffers, so the
fsync_dirty_data_buffers() call won't actually write anything.
I haven't tested recovery - presumably this is broken by the
change?

Unlocking the metapage pages will fix all this up -
filemap_fdatasync() will then do its work.
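
To be concrete, once the metapage pages are dirtied and left unlocked
in the normal way, a sync path along these lines covers what the
buffer-list walk used to do.  A minimal sketch, not the actual jfs
code - sync_dir_metadata is a made-up name, and the return-value
conventions of these calls have been shifting in 2.5.x, so take the
error handling as approximate:

        static int sync_dir_metadata(struct inode *inode)
        {
                struct address_space *mapping = inode->i_mapping;
                int err, err2;

                err = filemap_fdatasync(mapping);   /* start writeback */
                err2 = filemap_fdatawait(mapping);  /* wait for the I/O */
                if (!err)
                        err = err2;
                return err;
        }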

So if it's OK with you, I'll go ahead with the core writeback
patch (possibly breaking recovery) and we work on the metapage
writeback changes separately.