JFS does rely on the order in which I/O is written to disk. In practice, I don't know how likely you are to have a problem with it. Since the journal pages are written sequentially, I would expect that most of the time the order will be preserved. Also, dirty metadata pages are not written until all of the journal pages associated with the change have been written, and since dirty pages are not usually written immediately, in most cases the journal records will reach the disk before the actual metadata.
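To make that write-ahead ordering concrete, here is a toy Python simulation (all the names, `Disk`, `commit`, `replay`, are invented for illustration; this is not JFS code). The journal record is flushed to the platter before the metadata write is even queued, so a power cut that discards the cached metadata write still leaves enough in the log for replay to redo it:

```python
import random

class Disk:
    """Disk whose volatile write cache may reorder pending writes."""
    def __init__(self):
        self.media = {}   # block -> value; what survives a power cut
        self.cache = []   # pending (block, value) writes

    def write(self, block, value):
        self.cache.append((block, value))

    def flush(self):
        # Drain the cache in an arbitrary order, as a reordering
        # write cache might.
        random.shuffle(self.cache)
        for block, value in self.cache:
            self.media[block] = value
        self.cache.clear()

    def power_cut(self):
        # Anything still in the volatile cache is lost.
        self.cache.clear()

def commit(disk, journal_block, meta_block, value):
    """Write-ahead rule: journal record first, flush, then metadata."""
    disk.write(journal_block, ("journal", meta_block, value))
    disk.flush()                   # journal record is durable before...
    disk.write(meta_block, value)  # ...the metadata page is queued

def replay(disk):
    """fsck-style recovery: redo metadata from durable journal records."""
    for record in list(disk.media.values()):
        if isinstance(record, tuple) and record[0] == "journal":
            disk.media[record[1]] = record[2]

disk = Disk()
commit(disk, "log0", "meta", "new-inode")
disk.power_cut()   # the cached metadata write is lost
replay(disk)       # but the journal record survived, so redo it
print(disk.media["meta"])  # -> new-inode
```

What this sketch deliberately leaves out is the hazard described below: if recovery trusted a sync point that had been advanced past a transaction whose journal writes were still in the cache, `replay` would never see that record at all.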
The most likely place for error is when we update the "sync point" in the journal. The sync point marks the most recent transaction that still has outstanding I/O. If we move the sync point past transactions that we believe have been completely written to disk, but that are still in the disk's cache, fsck will not replay those transactions when the system is rebooted. Of course, I have no real data to back up my guess about how likely problems are to occur; this is just off the top of my head. In short, JFS cannot guarantee filesystem integrity when a disk's write cache is on.

W. Wilson Ho wrote:
> Hi all,
>
> Can anyone tell me if JFS correctly handles the hard disk's "write
> cache" reordering? Basically, if the disk's write cache is on, and it
> shuffles the order in which the journal blocks are being flushed to the
> disk, AND if the power is cut off in the middle of this, the journal log
> will be out of sequence.
>
> Can JFS correctly handle this case and only replay that part of the
> journal log that is valid?
>
> Thanks!
>
> Wilson Ho

-- 
David Kleikamp
IBM Linux Technology Center

_______________________________________________
Jfs-discussion mailing list
[EMAIL PROTECTED]
http://www-124.ibm.com/developerworks/oss/mailman/listinfo/jfs-discussion
