I've got a dual-Xeon machine running Fedora Core 3 attached to an Apple
Xserve RAID with 2 x 2.2TB devices. I use LVM to concatenate them into a
single 4.4TB device, which held a JFS filesystem.
Yesterday the filesystem went read-only, spitting out lots of these:

Jan 25 03:24:28 higgs08 kernel: ERROR: (device dm-0): DT_GETPAGE: dtree
page corrupt

I unmounted the filesystem and ran fsck, and it fixed a bunch of stuff
(sorry, I didn't capture that output).
I remounted it and all was happy until this morning, when I started
getting these:

Jan 25 22:15:58 higgs08 kernel: ERROR: (device dm-0): dbAllocNext:
Corrupt dmap page

I unmounted the filesystem again, and now it won't remount. jfs_fsck says
that both superblocks are corrupt. I used jfs_debugfs to look at the two
superblocks, and they both contain the same gibberish.

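For anyone who wants to reproduce the check, the magic bytes can be read
directly. This is just a sketch under my assumptions about the usual JFS
on-disk layout (primary superblock at offset 0x8000, secondary at 0x9000,
both starting with the magic "JFS1"):

```python
# Sanity-check the JFS superblock magics, independent of jfs_fsck.
# Assumed layout: primary superblock at 0x8000, secondary at 0x9000,
# each beginning with the 4-byte magic b"JFS1".

SUPER1_OFF = 0x8000   # assumed primary superblock offset
SUPER2_OFF = 0x9000   # assumed secondary superblock offset
JFS_MAGIC = b"JFS1"

def check_superblocks(device):
    """Read the magic at both superblock offsets.

    Returns {"primary": (ok, magic), "secondary": (ok, magic)}, where ok
    is True when the magic matches JFS_MAGIC.
    """
    results = {}
    with open(device, "rb") as dev:
        for name, off in (("primary", SUPER1_OFF),
                          ("secondary", SUPER2_OFF)):
            dev.seek(off)
            magic = dev.read(4)
            results[name] = (magic == JFS_MAGIC, magic)
    return results
```

Running it as `check_superblocks("/dev/VolGroup00/lvol0")` should report
both superblocks corrupt in my case, given the 'Pm??' garbage above.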
I read in one place that copying a superblock from a fresh mkfs.jfs of an
identical disk could possibly fix this.
Any other ideas for resurrecting this data?

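Here's roughly what that transplant would look like, in case anyone can
sanity-check the idea before I try it. The offset (primary superblock at
0x8000) and the 512-byte length are my assumptions about the JFS on-disk
layout, not verified against the jfsutils headers, and I'd only run this
against a copy of the data:

```python
# Sketch of the superblock-transplant idea: copy the primary superblock
# region from a freshly mkfs.jfs'ed, identically-sized scratch device
# onto the damaged one. SUPER1_OFF and SB_LEN are assumptions -- check
# jfs_filsys.h before trying this on real data.

SUPER1_OFF = 0x8000  # assumed primary superblock offset
SB_LEN = 512         # assumed size of the region to copy

def transplant_superblock(src_dev, dst_dev, off=SUPER1_OFF, length=SB_LEN):
    """Copy the superblock region from src_dev onto dst_dev.

    Returns the bytes that were overwritten, so the write can be undone
    if the transplant makes things worse.
    """
    with open(src_dev, "rb") as src:
        src.seek(off)
        good = src.read(length)
    with open(dst_dev, "r+b") as dst:
        dst.seek(off)
        old = dst.read(length)
        dst.seek(off)
        dst.write(good)
    return old
```

Saving the returned bytes to a file first would at least make the
operation reversible.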
fsck output
[EMAIL PROTECTED] log]# jfs_fsck /dev/VolGroup00/lvol0
jfs_fsck version 1.1.7, 22-Jul-2004
processing started: 1/26/2005 14.24.55
Using default parameter: -p
The current device is: /dev/VolGroup00/lvol0

The superblock does not describe a correct jfs file system.

If device /dev/VolGroup00/lvol0 is valid and contains a jfs file system,
then both the primary and secondary superblocks are corrupt
and cannot be repaired, and fsck cannot continue.

Otherwise, make sure the entered device /dev/VolGroup00/lvol0 is
correct.

jfs_debugfs output
 > su
[1] s_magic: 'Pm??'                 [15] s_ait2.addr1: 0x47
[2] s_version: 2042101673           [16] s_ait2.addr2: 0xdad0bf23
[3] s_size: 0x754f77d2ff733ab6           s_ait2.address: 308613791523
[4] s_bsize: -551086469             [17] s_logdev: 0xfc60bfc6
[5] s_l2bsize: -28929               [18] s_logserial: 0x87fec159
[6] s_l2bfactor: 14907              [19] s_logpxd.len: 10252845
[7] s_pbsize: 2041535975            [20] s_logpxd.addr1: 0xeb
[8] s_l2pbsize: 20148               [21] s_logpxd.addr2: 0xdad36af3
[9] pad: Not Displayed                   s_logpxd.address: 1012988603123
[10] s_agsize: 0x0eacfdf4           [22] s_fsckpxd.len: 15211755
[11] s_flag: 0x81763500             [23] s_fsckpxd.addr1: 0x87
                                    [24] s_fsckpxd.addr2: 0x86413b36
     JFS_COMMIT JFS_GROUPCOMMIT          s_fsckpxd.address: 582073006902
                                    [25] s_time.tv_sec: 0xaf1831dd
     JFS_SPARSE                     [26] s_time.tv_nsec: 0x0f1820be
     DASD_ENABLED                   [27] s_fpack: 'i?>z??u0?'
[12] s_state: 0x00000002
     FM_DIRTY
[13] s_compress: -1951450971
[14] s_ait2.len: 10252845

display_super: [m]odify or e[x]it: x
 > s2p
[1] s_magic: 'Pm??'                 [16] s_aim2.len: 15262231
[2] s_version: 2042101673           [17] s_aim2.addr1: 0xfb
[3] s_size: 0x754f77d2ff733ab6      [18] s_aim2.addr2: 0xf541734b
[4] s_bsize: -551086469                  s_aim2.address: 1082151498571
[5] s_l2bsize: -28929               [19] s_logdev: 0xfc60bfc6
[6] s_l2bfactor: 14907              [20] s_logserial: 0x87fec159
[7] s_pbsize: 2041535975            [21] s_logpxd.len: 10252845
[8] s_l2pbsize: 20148               [22] s_logpxd.addr1: 0xeb
[9] s_agsize: 0x0eacfdf4            [23] s_logpxd.addr2: 0xdad36af3
[10] s_flag: 0x81763500                  s_logpxd.address: 1012988603123
                                    [24] s_fsckpxd.len: 15211755
     GROUPCOMMIT                    [25] s_fsckpxd.addr1: 0x87
     SPARSE                         [26] s_fsckpxd.addr2: 0x86413b36
     DASD_ENABLED                        s_fsckpxd.address: 582073006902
[11] s_state: 0x00000002            [27] s_fsckloglen: -382147501
     DIRTY                          [28] s_fscklog: 75
[12] s_compress: -1951450971        [29] s_fpack: 'i?>z??uS?8[?????N?E'?7?C????&)?????{??
[13] s_ait2.len: 10252845
[14] s_ait2.addr1: 0x47
[15] s_ait2.addr2: 0xdad0bf23
     s_ait2.address: 308613791523
display_super2: [m]odify or e[x]it: x
 >


LVM info
[EMAIL PROTECTED] ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda
  VG Name               VolGroup00
  PV Size               2.18 TB / not usable 2.00 TB
  Allocatable           yes (but full)
  PE Size (KByte)       4096
  Total PE              572320
  Free PE               0
  Allocated PE          572320
  PV UUID               f5dplL-wxvI-El0p-yszB-cLhd-5vQC-5QiBEO

  --- Physical volume ---
  PV Name               /dev/sdb
  VG Name               VolGroup00
  PV Size               2.18 TB / not usable 2.00 TB
  Allocatable           yes (but full)
  PE Size (KByte)       4096
  Total PE              572320
  Free PE               0
  Allocated PE          572320
  PV UUID               LU9kbu-Yn8h-68I4-VNuu-yATm-67vP-a28VsJ

[EMAIL PROTECTED] ~]# vgdisplay
  --- Volume group ---
  VG Name               VolGroup00
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               4.37 TB
  PE Size               4.00 MB
  Total PE              1144640
  Alloc PE / Size       1144640 / 4.37 TB
  Free PE / Size        0 / 0
  VG UUID               lR6p9W-bccA-6Udi-Va98-oEI4-Ft8X-rcuRn6

[EMAIL PROTECTED] ~]# lvdisplay
  --- Logical volume ---
  LV Name               /dev/VolGroup00/lvol0
  VG Name               VolGroup00
  LV UUID               4TIAIn-XFaI-LMuI-O5SI-vm6W-2nFJ-nccoNC
  LV Write Access       read/write
  LV Status             available
  # open                0
  LV Size               4.37 TB
  Current LE            1144640
  Segments              1
  Allocation            inherit
  Read ahead sectors    0
  Block device          253:0


Sean Murphy
[EMAIL PROTECTED]

_______________________________________________
Jfs-discussion mailing list
[email protected]
http://www-124.ibm.com/developerworks/oss/mailman/listinfo/jfs-discussion
