Removed using the tools.

I did know that one snapshot in a string of snapshots was corrupt. However, the snapshots before and after it were OK; indeed I had reverted to them at times to check things. The final snapshot was fine, as were the two or three snapshots after the corrupt one. So I didn't need all the other snapshots. I could have removed them one by one, but chose to remove all snapshots... an option in the tools.

The server booted into Linux after the removal of all the snapshots, but the filesystem was read-only. I rebooted again and got a kernel panic!

Rebooted into Linux rescue mode and started e2fsck to repair the filesystem. Still waiting for it to finish...
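(Roughly what's running now, assuming the affected filesystem is /dev/sda2 as in the dumpe2fs example quoted below:

    e2fsck -f -y /dev/sda2

-f forces the check even if the filesystem claims to be clean, and -y answers yes to every repair prompt so it can churn away unattended.)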

Ben




On 13/08/2010 10:38 AM, Jake Anderson wrote:
On 13/08/10 09:43, Ben Donohue wrote:
 Hi Amos,

Actually I was typing the command incorrectly... I'll admit it...
So I didn't need to find an alternate superblock; however, I did have to run e2fsck.

e2fsck is still running... I wonder if anyone else has had e2fsck run for a few days fixing errors and had the system come up OK afterwards?

Is it a given that if it takes a few days to repair a filesystem, it must be smashed beyond repair?... I'll know when this thing finishes!

Yes I have a backup of the data... just interested in whether e2fsck will fix it.

That's the last time I remove all snapshots at once from an ESX server! One at a time from now on...

Anyway, to find the other superblocks, I found the answer at...

http://www.cyberciti.biz/faq/recover-bad-superblock-from-corrupted-partition/

dumpe2fs /dev/sda2 | grep superblock

...gives the locations of alternate superblocks.
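(If one of those alternates had been needed, e2fsck can take the block number via -b, something like:

    e2fsck -b 32768 /dev/sda2

32768 is just the usual first backup location on a 4k-block filesystem; use whatever numbers dumpe2fs actually prints.)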

Ben

Are you removing them using the vmware tools or just deleting the files it makes?

