We have a Red Hat 8.0 machine with about ten 180 GB hard drives. They are striped using
software RAID-0 into one big partition (we just need one big bit bucket; performance
is not that important). We are attempting to do snapshotting of our fileservers, so I
did a backup of about a TB of data. That worked fine; however, when I tried to make an
archive copy using hard links instead of actually copying the data, i.e.

cp -al daily.0 daily.1
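
For context, that command is part of the usual hard-link snapshot rotation, roughly as
sketched below. The daily.N names are what I actually use; the rsync source path and
flags are only illustrative and stand in for however the real backup is driven:

# expire the oldest snapshot (only two are kept here, so daily.1 goes)
rm -rf daily.1

# make a hard-linked copy of the newest snapshot; only directory entries
# are duplicated, not file data
cp -al daily.0 daily.1

# refresh daily.0 from the live fileserver; unchanged files stay hard-linked
# with daily.1, changed files get fresh copies in daily.0 only
rsync -a --delete fileserver:/export/ daily.0/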

That cp is where the JFS kernel module oopsed. This was with the default RH 8.0 kernel,
i.e. 2.4.18-14. I upgraded to the latest Red Hat distributed kernel, i.e. 2.4.18-17.8.0,
and tried to delete

daily.1

however I am seeing a similar kind of problem; the kernel messages are pasted below.

Any suggestions?

Thanks,

diRead: i_ino != di_number
DT_GETPAGE: dtree page corrupt
DT_GETPAGE: dtree page corrupt
DT_GETPAGE: dtree page corrupt
DT_GETPAGE: dtree page corrupt
DT_GETPAGE: dtree page corrupt
DT_GETPAGE: dtree page corrupt
jfs_lookup: dtSearch returned 5
DT_GETPAGE: dtree page corrupt
jfs_lookup: dtSearch returned 5
DT_GETPAGE: dtree page corrupt
DT_GETPAGE: dtree page corrupt
DT_GETPAGE: dtree page corrupt
DT_GETPAGE: dtree page corrupt
DT_GETPAGE: dtree page corrupt
jfs_lookup: dtSearch returned 5
DT_GETPAGE: dtree page corrupt
jfs_lookup: dtSearch returned 5
DT_GETPAGE: dtree page corrupt
DT_GETPAGE: dtree page corrupt
DT_GETPAGE: dtree page corrupt
DT_GETPAGE: dtree page corrupt
DT_GETPAGE: dtree page corrupt
DT_GETPAGE: dtree page corrupt
DT_GETPAGE: dtree page corrupt
jfs_lookup: dtSearch returned 5
DT_GETPAGE: dtree page corrupt
jfs_lookup: dtSearch returned 5
DT_GETPAGE: dtree page corrupt
jfs_lookup: dtSearch returned 5
DT_GETPAGE: dtree page corrupt
DT_GETPAGE: dtree page corrupt
DT_GETPAGE: dtree page corrupt
DT_GETPAGE: dtree page corrupt
MetaData crosses page boundary!!
bread failed!
jfs_lookup: dtSearch returned 5
MetaData crosses page boundary!!
bread failed!
jfs_lookup: dtSearch returned 5
MetaData crosses page boundary!!
bread failed!
jfs_lookup: dtSearch returned 5
assert((btstack)->top != &((btstack)->stack[MAXTREEHEIGHT]))
------------[ cut here ]------------
kernel BUG at jfs_dtree.c:3155!
invalid operand: 0000
autofs natsemi 8139too mii iptable_filter ip_tables nls_iso8859-1 jfs mousedev
CPU:    0
EIP:    0010:[<c809286b>]    Not tainted
EFLAGS: 00010286

EIP is at dtReadFirst [jfs] 0x177 (2.4.18-17.8.0)
eax: 0000003d   ebx: c2708b88   ecx: 00000000   edx: c6834000
esi: c2707e68   edi: 00000000   ebp: 00000000   esp: c4669e6c
ds: 0018   es: 0018   ss: 0018
Process updatedb (pid: 1364, stackpage=c4669000)
Stack: c809c1d9 c809e040 c2708af8 c1432d50 c40791e0 00000000 c5a01970 c809252e
       c40791e0 c4669ed4 00000001 c3cb8b40 00000246 c3cb8b40 c012dcad c14c5f74
       00000246 c3cb8cc0 c01471db c14c5f74 c5a01990 fffffffb c80f3d60 c40791e0
Call Trace: [<c809c1d9>] .rodata.str1.1 [jfs] 0x8b5 (0xc4669e6c))
[<c809e040>] .rodata.str1.32 [jfs] 0x1700 (0xc4669e70))
[<c809252e>] jfs_readdir [jfs] 0x77e (0xc4669e88))
[<c012dcad>] kmem_cache_free [kernel] 0x11 (0xc4669ea4))
[<c01471db>] dput [kernel] 0xbb (0xc4669eb4))
[<c80f3d60>] table [nls_iso8859-1] 0x0 (0xc4669ec4))
[<c013cfbf>] cp_new_stat64 [kernel] 0x9b (0xc4669ef8))
[<c014345d>] vfs_readdir [kernel] 0x75 (0xc4669f74))
[<c01436e0>] filldir [kernel] 0x0 (0xc4669f80))
[<c014389b>] sys_getdents [kernel] 0x4b (0xc4669f94))
[<c01436e0>] filldir [kernel] 0x0 (0xc4669f9c))
[<c010894f>] system_call [kernel] 0x33 (0xc4669fc0))


Code: 0f 0b 53 0c e5 c1 09 c8 5e 5f e9 6b ff ff ff 8b 54 24 1c 8b
 DT_GETPAGE: dtree page corrupt
DT_GETPAGE: dtree page corrupt
DT_GETPAGE: dtree page corrupt

