It does not look like you used the force option. Or, you ran with the file system mounted.
Unmount the fs on all nodes and do:

$ fsck.ocfs2 -f /dev/dm-1

Paulo Rodrigues wrote:
> Hello Sunil,
>
> fsck says it's clean:
>
> Checking OCFS2 filesystem in /dev/dm-1:
>   label:              /var/lib/dovecot/spool
>   uuid:               ab 1e ac 82 67 cb 47 58 81 07 2b 00 55 f6 09 36
>   number of blocks:   246838717
>   bytes per block:    4096
>   number of clusters: 246838717
>   bytes per cluster:  4096
>   max slots:          4
>
> o2fsck_should_replay_journals:564 | slot 0 JOURNAL_DIRTY_FL: 0
> o2fsck_should_replay_journals:564 | slot 1 JOURNAL_DIRTY_FL: 0
> o2fsck_should_replay_journals:564 | slot 2 JOURNAL_DIRTY_FL: 0
> o2fsck_should_replay_journals:564 | slot 3 JOURNAL_DIRTY_FL: 0
> /dev/dm-1 is clean. It will be checked after 20 additional mounts.
>
> I expected upgrading to 1.5.0 would fix it... What do you think?
>
> Many thanks,
> Paulo
>
> This could suggest an on disk problem. Have you run fsck.ocfs2
> recently?
>
>
> ------------------------------------------------------------------------
>
> _______________________________________________
> Ocfs2-users mailing list
> [email protected]
> http://oss.oracle.com/mailman/listinfo/ocfs2-users
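For the archives, the procedure above can be sketched as a small script. This is only a sketch, not a supported tool: `/dev/dm-1` and the mount point are taken from this thread, the mounted-check via /proc/mounts is an assumption about your setup, and the fsck command is printed rather than run so nothing is touched by accident.

```shell
#!/bin/sh
# Sketch of the forced-check procedure described above.
# /dev/dm-1 is the device from this thread; adjust for your cluster.
DEV=/dev/dm-1

# fsck must never run on a mounted OCFS2 volume, so first make sure
# the device is not mounted on THIS node (repeat on every node,
# e.g. umount /var/lib/dovecot/spool).
if grep -q "^$DEV " /proc/mounts; then
    echo "$DEV is still mounted here; umount it on every node first" >&2
else
    # -f forces a full check even though the volume is marked clean.
    # Printed only, as a dry run; drop the echo to actually run it.
    echo "would run: fsck.ocfs2 -f $DEV"
fi
```

Run it once per node for the umount step, then run the actual fsck.ocfs2 from a single node only.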
