> Anyone got a quick answer on this:
> During install (7.1 standard), what actual version of mke2fs is run? Is
> it the same as what gets installed, or is it an older one in an
> image file somewhere?
> I'm pretty sure there is a bug in it, but before reporting I thought I'd
> better check which version is run during the install.
>
> The reason is:
> I have two computers that I set up about 3 months ago (RH7.1) that
> are exactly the same except for the CPU (one is a PIII 933, the other
> a Celeron 700).
>
> df of the local partitions is:
> (I'm clearing out disk space as soon as I send this email :-)
>
> Filesystem           1k-blocks      Used Available Use% Mounted on
> /dev/hda5             18034128  15118180   1999844  89% /
> /dev/hda2                54447      3489     48147   7% /boot
> /dev/hda1              1027856      7088   1020768   1% /dos
>
> Filesystem           1k-blocks      Used Available Use% Mounted on
> /dev/hda5             18034128  16678668    439356  98% /
> /dev/hda2                54447      3489     48147   7% /boot
> /dev/hda1              1027856      6976   1020880   1% /dos
>
> So they are both partitioned exactly the same, and both report exactly
> the same bad block on a "badblocks -v /dev/hda5" (the 2nd-to-last block):
>
> Checking for bad blocks in read-only mode
> From block 0 to 18322101
> 18322100
> Pass completed, 1 bad blocks found.
>
> I guess this is some sort of end-condition bug.
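If it is an end condition, one quick way to check (just a sketch - it assumes
the 18G box above, the usual 2.4 /proc/partitions layout with #blocks in the
third column and the name in the fourth, and a badblocks that takes a trailing
block-count / last-block argument; see badblocks(8) for the exact meaning in
your e2fsprogs version) is to compare the kernel's idea of the partition size
with the range badblocks scans, then re-run the scan with the end given
explicitly:

# How big does the kernel say hda5 is, in 1K blocks?
BLOCKS=$(awk '$4 == "hda5" { print $3 }' /proc/partitions)
echo "kernel: $BLOCKS blocks; badblocks scanned 0 to 18322101"

# Re-run the read-only scan with the end stated explicitly.
# (Use $BLOCKS if your badblocks wants a block count, or
#  $((BLOCKS - 1)) if it wants the last block number.)
badblocks -v /dev/hda5 "$BLOCKS"

If the phantom block at 18322100 disappears when the end is given by hand,
the disk is fine and it's the end-of-device calculation in badblocks (or in
the installer's mke2fs -c run) that is off.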
Hmmm, my suspicions have just been pretty much confirmed. I just installed
another machine with a 4G "/" partition and again got bad blocks right
before the last block, as shown below.

Checking for bad blocks in read-only mode
From block 0 to 3959991
3959988
3959989
3959990
Pass completed, 3 bad blocks found.

Has anyone else out there ever run a badblocks check (in read-only mode)
on a "/" partition over 3G?

I'm going to file a bug report now, since nobody here on the mailing list
seems to know anything about this - but I thought it might be of interest.

-Cheers
-Andrew

--
MS ... if only he hadn't been hang gliding!
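P.S. For anyone who wants to double-check this before the bug report goes in,
a rough test is to try reading the reported blocks straight off the device and
compare them with what the kernel thinks the partition holds (the block
numbers below are the ones from the 4G box above - substitute your own
partition and numbers):

# What the kernel thinks hda5 holds, in 1K blocks:
awk '$4 == "hda5" { print $3 }' /proc/partitions

# Try to read the three "bad" blocks directly:
dd if=/dev/hda5 of=/dev/null bs=1024 skip=3959988 count=3

If dd reads them cleanly, the blocks themselves are fine and badblocks'
end calculation is suspect; if dd just comes up short at the end of the
device, the reported numbers are past the real end of the partition, which
again points at an end-of-device miscalculation rather than a dying disk
(a genuinely bad sector would normally show up as a read error on one
specific block, not a neat run at the very end).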
