Re: Corrupted ext3 filesystem! How can I fix?

2003-11-19 Thread Jae-hwa Park
Thank you for your suggestions.

Unfortunately, it still doesn't work. I tried the 'mke2fs -n' option and then
'e2fsck -b' with each of the backup superblock locations it reported, but I
still could not get the corrupted filesystem checked. Is there any other way?

[EMAIL PROTECTED]:~# mke2fs -n /dev/data_vg/lvol1
mke2fs 1.26 (3-Feb-2002)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
5242880 inodes, 10472448 blocks
523622 blocks (5.00%) reserved for the super user
First data block=0
320 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
 4096000, 7962624
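As a cross-check on the list above: with the sparse_super feature (the default here), backup superblocks live only in block group 1 and in groups whose index is a power of 3, 5 or 7. A minimal sketch (not from the original mails) that recomputes the same block numbers from the blocks-per-group and total-block figures printed by mke2fs -n:

```shell
#!/bin/sh
# Recompute sparse_super backup-superblock locations.
# Backups sit in block group 1 and in groups 3^n, 5^n, 7^n;
# each backup is at block (group_index * blocks_per_group).
backup_superblocks() {
    bpg=$1    # blocks per group (32768 for a 4k-block filesystem)
    total=$2  # total blocks in the filesystem (10472448 above)
    max_group=$((total / bpg))
    {
        echo 1
        for base in 3 5 7; do
            g=$base
            while [ "$g" -le "$max_group" ]; do
                echo "$g"
                g=$((g * base))
            done
        done
    } | sort -n | while read -r g; do
        blk=$((g * bpg))
        [ "$blk" -lt "$total" ] && echo "$blk"
    done
}

backup_superblocks 32768 10472448
```

With the values from this thread it reproduces exactly the eleven block numbers mke2fs -n printed, 32768 through 7962624.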

[EMAIL PROTECTED]:~# e2fsck -b 32768 /dev/data_vg/lvol1
e2fsck 1.26 (3-Feb-2002)
e2fsck: Bad magic number in super-block while trying to open
/dev/data_vg/lvol1

The superblock could not be read or does not describe a correct ext2
filesystem.  If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>

[EMAIL PROTECTED]:~# e2fsck -b 98304 /dev/data_vg/lvol1
e2fsck 1.26 (3-Feb-2002)
e2fsck: Bad magic number in super-block while trying to open
/dev/data_vg/lvol1

The superblock could not be read or does not describe a correct ext2
filesystem.  If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
...
[EMAIL PROTECTED]:~# e2fsck -b 7962624 /dev/data_vg/lvol1
e2fsck 1.26 (3-Feb-2002)
e2fsck: Attempt to read block from filesystem resulted in short read while
trying to open /dev/data_vg/lvol1
Could this be a zero-length partition?
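Since every candidate superblock reports a bad magic number, one thing worth checking is whether any copy still carries the ext2 magic 0xEF53 at all (it is stored little-endian, bytes 53 ef, at offset 56 within the superblock; the primary superblock starts at byte 1024, and a backup at block B of a 4k filesystem starts at byte B*4096). A hedged sketch, not from the original mails; the device path is the one from this thread:

```shell
#!/bin/sh
# Check for the ext2/3 magic number 0xEF53 (little-endian "53 ef")
# at offset 56 of a superblock copy in a device or image file.
# Usage: has_ext2_magic <device-or-image> <byte-offset-of-superblock>
has_ext2_magic() {
    dev=$1; sb_off=$2
    magic=$(dd if="$dev" bs=1 skip=$((sb_off + 56)) count=2 2>/dev/null |
            od -An -tx1 | tr -d ' \n')
    [ "$magic" = "53ef" ]
}

# Example probes (device path taken from this thread, run as root):
#   has_ext2_magic /dev/data_vg/lvol1 1024            && echo "primary intact"
#   has_ext2_magic /dev/data_vg/lvol1 $((32768*4096)) && echo "backup intact"
```

If no offset shows the magic, the superblocks are gone rather than merely mislocated, and the short read on the last backup suggests the LV may now be smaller than the filesystem that was made on it.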

On Wed, Nov 19, 2003 at 01:40:37PM +0100, Giorgio Bellussi wrote:
> from `man mke2fs':
> ...
>   -n     causes mke2fs to not actually create a filesystem, but display
>          what it would do if it were to create a filesystem.  This can be
>          used to determine the location of the backup superblocks for a
>          particular filesystem, so long as the mke2fs parameters that
>          were passed when the filesystem was originally created are used
>          again.  (With the -n option added, of course!)
>
> Doug Griswold wrote:
>
> >You should be able to use a backup superblock on that filesystem.
> >[snip]

Re: Corrupted ext3 filesystem! How can I fix?

2003-11-19 Thread Giorgio Bellussi
from `man mke2fs':
...
  -n     causes mke2fs to not actually create a filesystem, but display
         what it would do if it were to create a filesystem.  This can be
         used to determine the location of the backup superblocks for a
         particular filesystem, so long as the mke2fs parameters that
         were passed when the filesystem was originally created are used
         again.  (With the -n option added, of course!)
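The excerpt above can be put to work by capturing the mke2fs -n output and pulling out just the block list. A small sketch, with the sample output from this thread fed in via a heredoc (in a real run you would pipe `mke2fs -n /dev/data_vg/lvol1` instead):

```shell
#!/bin/sh
# Extract backup-superblock block numbers from `mke2fs -n` output:
# take the lines following the "Superblock backups" banner, strip the
# commas, and print one block number per line.
extract_backups() {
    awk '/Superblock backups stored on blocks:/ { grab = 1; next }
         grab && /^[[:space:]]*[0-9]/ {
             gsub(/,/, "")
             for (i = 1; i <= NF; i++) print $i
             next
         }
         grab { exit }'
}

# Sample output copied from Jae-hwa's mail:
extract_backups <<'EOF'
Superblock backups stored on blocks:
 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
 4096000, 7962624
EOF
```

The resulting list can then drive a loop of `e2fsck -b` attempts, as Jae-hwa did by hand.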


Doug Griswold wrote:

You should be able to use a backup superblock on that filesystem.
[snip]
Re: Corrupted ext3 filesystem! How can I fix?

2003-11-19 Thread Doug Griswold
You should be able to use a backup superblock on that filesystem.  You
can use e2fsck -b x <device>, where x is the location of the backup
superblock.  This location depends on the block size of the
filesystem.  If your filesystem was created with a 4k block size, then
your first backup superblock will be at 32768.  You can also use
dumpe2fs to find this information on your other filesystems.
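The block-size dependence can be made concrete: a group's block bitmap is one block, so blocks_per_group is 8 * block_size, and the first backup sits at the start of block group 1 (shifted by one when the block size is 1k, because block 0 is then the boot block and the first data block is 1). A sketch, not from the original mails:

```shell
#!/bin/sh
# First backup-superblock location as a function of block size.
# blocks_per_group = 8 * block_size (one bitmap block covers
# 8 * block_size blocks); group 1 starts at that block number,
# plus one when the block size is 1024 (first data block is 1).
first_backup_block() {
    bs=$1
    bpg=$((bs * 8))
    if [ "$bs" -eq 1024 ]; then
        echo $((bpg + 1))
    else
        echo "$bpg"
    fi
}

first_backup_block 4096    # -> 32768, matching the 4k case above
```

This also explains the 8193 that e2fsck's error message suggests: it is the 1k-block-size case.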


Good luck

>>> Jae-hwa Park <[EMAIL PROTECTED]> 11/19/03 04:34 AM >>>
Hello, gurus!
[snip]

Corrupted ext3 filesystem! How can I fix?

2003-11-19 Thread Jae-hwa Park
Hello, gurus!

Please help me. We're running Linux on zSeries (Red Hat 7.2). After z/VM
crashed, our Linux server (ceuna) has filesystem errors like those below.

Is there any way to recover from this problem?

[EMAIL PROTECTED]:~# pvscan
pvscan -- reading all physical volumes (this may take a while...)
pvscan -- ACTIVE   PV "/dev/dasdg1"  of VG "ora_vg"  [2.29 GB / 0 free]
pvscan -- ACTIVE   PV "/dev/dasdh1"  of VG "ora_vg"  [2.29 GB / 0 free]
pvscan -- ACTIVE   PV "/dev/dasdi1"  of VG "data_vg" [2.29 GB / 0 free]
pvscan -- ACTIVE   PV "/dev/dasdj1"  of VG "data_vg" [2.29 GB / 0 free]
pvscan -- ACTIVE   PV "/dev/dasdk1"  of VG "data_vg" [2.29 GB / 0 free]
pvscan -- ACTIVE   PV "/dev/dasdl1"  of VG "data_vg" [2.29 GB / 0 free]
pvscan -- ACTIVE   PV "/dev/dasdm1"  of VG "data_vg" [2.29 GB / 0 free]
pvscan -- ACTIVE   PV "/dev/dasdn1"  of VG "data_vg" [2.29 GB / 0 free]
pvscan -- ACTIVE   PV "/dev/dasdo1"  of VG "data_vg" [2.29 GB / 0 free]
pvscan -- ACTIVE   PV "/dev/dasdp1"  of VG "data_vg" [2.29 GB / 0 free]
pvscan -- ACTIVE   PV "/dev/dasdq1"  of VG "data_vg" [2.29 GB / 0 free]
pvscan -- ACTIVE   PV "/dev/dasdr1"  of VG "data_vg" [2.29 GB / 0 free]
pvscan -- ACTIVE   PV "/dev/dasds1"  of VG "data_vg" [2.29 GB / 0 free]
pvscan -- ACTIVE   PV "/dev/dasdz1"  of VG "data_vg" [2.29 GB / 0 free]
pvscan -- ACTIVE   PV "/dev/dasdaa1" of VG "data_vg" [2.29 GB / 0 free]
pvscan -- ACTIVE   PV "/dev/dasdab1" of VG "data_vg" [2.29 GB / 0 free]
pvscan -- ACTIVE   PV "/dev/dasdac1" of VG "data_vg" [2.29 GB / 0 free]
pvscan -- ACTIVE   PV "/dev/dasdad1" of VG "data_vg" [2.29 GB / 0 free]
pvscan -- ACTIVE   PV "/dev/dasdae1" of VG "data_vg" [2.29 GB / 0 free]
pvscan -- ACTIVE   PV "/dev/dasdaf1" of VG "data_vg" [2.29 GB / 0 free]
pvscan -- total: 20 [45.84 GB] / in use: 20 [45.84 GB] / in no VG: 0 [0]

[EMAIL PROTECTED]:~# lvscan
lvscan -- ACTIVE"/dev/data_vg/lvol1" [39.95 GB]
lvscan -- ACTIVE"/dev/data_vg/lvol2" [1.18 GB]
lvscan -- ACTIVE"/dev/ora_vg/ora_lv1" [4.57 GB]
lvscan -- 3 logical volumes with 45.70 GB total in 2 volume groups
lvscan -- 3 active logical volumes

[EMAIL PROTECTED]:~# lvdisplay /dev/data_vg/lvol1
--- Logical volume ---
LV Name/dev/data_vg/lvol1
VG Namedata_vg
LV Write Accessread/write
LV Status  available
LV #   1
# open 0
LV Size39.95 GB
Current LE 10227
Allocated LE   10227
Allocation next free
Read ahead sectors 120
Block device   58:1

[EMAIL PROTECTED]:~# lvdisplay /dev/data_vg/lvol2
--- Logical volume ---
LV Name/dev/data_vg/lvol2
VG Namedata_vg
LV Write Accessread/write
LV Status  available
LV #   2
# open 0
LV Size1.18 GB
Current LE 303
Allocated LE   303
Allocation next free
Read ahead sectors 120
Block device   58:2

[EMAIL PROTECTED]:~# vgdisplay
--- Volume group ---
VG Name   data_vg
VG Access read/write
VG Status available/resizable
VG #  1
MAX LV256
Cur LV2
Open LV   0
MAX LV Size   255.99 GB
Max PV256
Cur PV18
Act PV18
VG Size   41.13 GB
PE Size   4 MB
Total PE  10530
Alloc PE / Size   10530 / 41.13 GB
Free  PE / Size   0 / 0
VG UUID   bdggwE-vRSw-w05h-CPZJ-nUnU-S5fe-io4wh6

--- Volume group ---
VG Name   ora_vg
VG Access read/write
VG Status available/resizable
VG #  0
MAX LV256
Cur LV1
Open LV   1
MAX LV Size   255.99 GB
Max PV256
Cur PV2
Act PV2
VG Size   4.57 GB
PE Size   4 MB
Total PE  1170
Alloc PE / Size   1170 / 4.57 GB
Free  PE / Size   0 / 0
VG UUID   3Tp2pa-bKCn-JEWF-sp3E-9U7H-bIzO-v1Y4Rk

[EMAIL PROTECTED]:~# tune2fs -l /dev/data_vg/lvol1
tune2fs 1.26 (3-Feb-2002)
tune2fs: Bad magic number in super-block while trying to open
/dev/data_vg/lvol1
Couldn't find valid filesystem superblock.

[EMAIL PROTECTED]:~# tune2fs -l /dev/data_vg/lvol2
tune2fs 1.26 (3-Feb-2002)
tune2fs: Bad magic number in super-block while trying to open
/dev/data_vg/lvol2
Couldn't find valid filesystem superblock.

[EMAIL PROTECTED]:~# e2fsck /dev/data_vg/lvol1
e2fsck 1.26 (3-Feb-2002)
Couldn't find ext2 superblock, trying backup blocks...
e2fsck: Bad magic number in super-block while trying to open
/dev/data_vg/lvol1

The superblock could not be read or does not describe a correct ext2
filesystem.  If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>