Please don't reply to lustre-devel. Instead, comment in Bugzilla by using the 
following link:
https://bugzilla.lustre.org/show_bug.cgi?id=11504



Running it with cfs4 (LLNL's modified version) didn't hit the segfault (a sketch for capturing a backtrace from the build that does crash follows the transcript):

# pigs38 /var/tmp > gdb /sbin/e2fsck
GNU gdb Red Hat Linux (6.3.0.0-1.132.1llnlrh)
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu"...Using host libthread_db
library "/lib64/tls/libthread_db.so.1".

(gdb) run -y -f -v /dev/sdb
Starting program: /sbin/e2fsck -y -f -v /dev/sdb
warning: shared library handler failed to enable breakpoint
e2fsck 1.39.cfs4 (14-Nov-2006)
Pass 1: Checking inodes, blocks, and sizes
Inode 84852775, i_size is 17592118779904, should be 17592118779904.  Fix? yes

Inode 101122093 has EXTENT_FL set, but is not in extents format
Fix? yes

Inode 101122093, i_blocks is 8, should be 0.  Fix? yes

Inode 112312396, i_size is 17591978426368, should be 17591978426368.  Fix? yes

Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Block bitmap differences:  -809065125
Fix? yes

Free blocks count wrong for group #24690 (30581, counted=30582).
Fix? yes

Free blocks count wrong (885705431, counted=885705432).
Fix? yes


/dev/sdb: ***** FILE SYSTEM WAS MODIFIED *****

  818586 inodes used (0.67%)
    5072 non-contiguous inodes (0.6%)
         # of inodes with ind/dind/tind blocks: 1046/1/0
90792744 blocks used (9.30%)
       0 bad blocks
       7 large files

  818539 regular files
      38 directories
       0 character device files
       0 block device files
       0 fifos
       0 links
       0 symbolic links (0 fast symbolic links)
       0 sockets
--------
  818577 files
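
For reference, if the segfault does reproduce with the unmodified e2fsck, the same gdb session can be used to capture a backtrace at the point of the crash. The session below is only a sketch: the binary path, device, and options mirror the run above but are otherwise assumptions, not output from an actual crashing run.

# Hypothetical session -- paths/device are assumptions, not from the log above
# pigs38 /var/tmp > gdb /sbin/e2fsck
(gdb) run -y -f -v /dev/sdb        # same options as the run above
...                                # if the segfault reproduces, gdb stops on SIGSEGV
(gdb) bt full                      # full backtrace with local variables
(gdb) info registers               # register state at the faulting instruction
(gdb) frame 0
(gdb) x/8i $pc                     # disassemble around the faulting address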

