data recovery ext3 (was RE: Recover zip file via Archive::Zip)
> While I'm still off topic and speaking of data recovery, has anyone ever
> recovered data from an ext3 filesystem after all utilities have been tried
> to repair it? I've tried all the utilities off of freshmeat.net and nothing
> works. I've got bad blocks and i-nodes. Any suggestions are welcome and you
> can email me off list.

as i had mentioned yesterday, i had written a script some time ago that dealt
a little with this. i have since found it (or a version of it) in dead tree
form- so i retyped it and am sure there are several things wrong with it. i am
not sure what all the components do anymore- i did not document it well :P

now that i've looked at it, it's really for getting to files that are unlinked
etc. so i am not sure it will do you any good. to bring this more on topic, i
would like to see what ways something like this can be improved- it served
useful to me in the past, but i'm sure it can be made more useful:

#!/usr/bin/perl
# added proper things when retyping it:
use warnings;
use diagnostics;
use strict;
#---
my $cfile      = "/tmp/commands.file";
my $filesystem = "/dev/hda6";
my @path = ("/tmp/recover", "", "/recover", "", ".ebu"); # making a path to put
my $date = "Oct";                 # just files from October      # stuff later

open (OUT, ">$cfile");
print OUT "open $filesystem\n";   # i wonder what this is for?

foreach (`/sbin/debugfs -R lsdel /dev/hda6`) { # why did i hard code /dev/hda6?
    # debugfs lets me list a bunch of inodes and i stick the list in a file
    m/(\d+)/;
    $path[3] = $1;    # had to split this regex to deal with some edge case
    $1 =~ m/(\d)/;    # but i can't recall what
    $path[1] = $1;
    my $quatch = join("", @path);
    my $place  = "$path[0]$path[1]";
    print OUT "dump $path[3] $quatch\n" if (m/$date/);
    `mkdir $place`;
}

- then i chmod 755 command.file? or is it a file used by another tool??? i
just don't remember!!! hope others can clarify?

it's funny about code... since i've written this, i've learnt how to document
things far more completely- good lesson to learn!
willy

--
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
data recovery ext3 (was RE: Recover zip file via Archive::Zip)
> Hello everyone,
>
> __snipped stuff to do with zipped files___
>
> While I'm still off topic and speaking of data recovery, has anyone ever
> recovered data from an ext3 filesystem after all utilities have been tried
> to repair it? I've tried all the utilities off of freshmeat.net and nothing
> works. I've got bad blocks and i-nodes. Any suggestions are welcome and you
> can email me off list.
>
> Thanks,
> Kevin
> --
> K Old [EMAIL PROTECTED]

once, a couple of years ago, i used perl and some utility to recover files
from a doomed filesystem- i thought it was debugfs, but i looked it up and am
now not sure... i will try to find my perl script...

the utility in question allows you to dump inodes as files into another
filesystem- if your disk has multiple partitions, or you have multiple disks,
you can dump your stuff there. what the perl script did was to organize them
into different directories according to their timestamp. then i installed a
new system (redhat something or other i think) and looked at the files with
nautilus- which figured out what /kind/ of file each was... a lot of manual
work, but well worth it. saved images, text, and scripts/config files that
way.

i will look for that script and post it tomorrow if i find it- no sense
letting it go to waste. i just wish i could remember the utility that let me
dump those inodes..

willy :)
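The sort-by-timestamp step described above can be sketched in shell; the
directory and file names here are made up, and GNU `touch -d` / `date -r`
are assumed:

```shell
# sort recovered files into per-month directories by modification time
# (directory and file names are hypothetical; assumes GNU touch/date)
set -e
dir=$(mktemp -d)
touch -d "2002-10-05 12:00" "$dir/notes.txt"   # stand-in for a dumped inode
for f in "$dir"/*; do
  [ -f "$f" ] || continue
  month=$(LC_ALL=C date -r "$f" +%b)           # month of mtime, e.g. "Oct"
  mkdir -p "$dir/$month"
  mv "$f" "$dir/$month/"
done
```

After the loop, each file sits under a directory named for the month it was
last modified, which is roughly what the Perl script's `$date` filter and
`mkdir $place` did.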
Re: data recovery ext3 (was RE: Recover zip file via Archive::Zip)
On Thu, Aug 07, 2003 at 03:09:06PM -0400, West, William M wrote:

>> While I'm still off topic and speaking of data recovery, has anyone ever
>> recovered data from an ext3 filesystem after all utilities have been tried
>> to repair it? I've tried all the utilities off of freshmeat.net and nothing
>> works. I've got bad blocks and i-nodes. Any suggestions are welcome and you
>> can email me off list.
>
> as i had mentioned yesterday, i had written a script some time ago that
> dealt a little with this. i have since found it (or a version of it) in
> dead tree form- so i retyped it and am sure there are several things wrong
> with it. i am not sure what all the components do anymore- i did not
> document it well :P

Let me help. :-)

> now that i've looked at it, it's really for getting to files that are
> unlinked etc. so i am not sure it will do you any good.

Partly it might. The only problem with your script is that it cannot deal
with data spanning more than 12 blocks (those blocks are usually not in one
piece but fragmented over the hard disk). A line like this shows such a
trickier example:

    99526 0 100644 6761321/1027 Sat Feb 2 09:11:58 2002

I don't know off the top of my head what to do with it, but it is laid out
in the ext2 undeletion how-to.

> to bring this more on topic, i would like to see what ways something like
> this can be improved- it served useful to me in the past, but i'm sure it
> can be made more useful:
>
> #!/usr/bin/perl
> # added proper things when retyping it:
> use warnings;
> use diagnostics;
> use strict;
> #---
> my $cfile      = "/tmp/commands.file";
> my $filesystem = "/dev/hda6";
> my @path = ("/tmp/recover", "", "/recover", "", ".ebu"); # making a path
> my $date = "Oct";                 # just files from October
>
> open (OUT, ">$cfile");
> print OUT "open $filesystem\n";   # i wonder what this is for?

Debug message?

> foreach (`/sbin/debugfs -R lsdel /dev/hda6`) { # why hard code /dev/hda6?
>     # debugfs lets me list a bunch of inodes and i stick the list in a file
>     m/(\d+)/;
>     $path[3] = $1;    # had to split this regex to deal with some edge case
>     $1 =~ m/(\d)/;    # but i can't recall what
>     $path[1] = $1;
>     my $quatch = join("", @path);
>     my $place  = "$path[0]$path[1]";
>     print OUT "dump $path[3] $quatch\n" if (m/$date/);

Essentially, from a line like

    2210070 1000 100600 228432/ 6 Wed Jul 23 09:26:10 2003

you extract the inode (2210070) and from that turn

    my @path = ("/tmp/recover", "", "/recover", "", ".ebu");

into

    @path = ("/tmp/removed", 2, "/recover", 2210070, ".ebu");

So the deleted inode gets dumped into /tmp/removed/2/recover/2210070.ebu

This could have been done more easily:

    @path[3,1] = /((\d)\d+)/;

>     `mkdir $place`;
> }
>
> - then i chmod 755 command.file? or is it a file used by another tool???

command.file is the list of dump directives. It's supposedly a shell script
that you can run later. So the above Perl script just generates another
script. I am just not sure about

    print OUT "open $filesystem\n";

"open /dev/hda6" is not a meaningful command in shell scripts AFAIK.

Tassilo
--
$_=q#,}])!JAPH!qq(tsuJ[{@tnirp}3..0}_$;//::niam/s~=)]3[))_$-3(rellac(=_$({
pam{rekcahbus})(rekcah{lrePbus})(lreP{rehtonabus})!JAPH!qq(rehtona{tsuJbus#;
$_=reverse,s+(?=sub).+q#q!'qq.\t$.'!#+sexisexiixesixeseg;y~\n~~;eval
RE: data recovery ext3 (was RE: Recover zip file via Archive::Zip)
>> now that i've looked at it, it's really for getting to files that are
>> unlinked etc. so i am not sure it will do you any good.
>
> Partly it might. The only problem with your script is that it cannot deal
> with data spanning more than 12 blocks (those blocks are usually not in one
> piece but fragmented over the hard disk). A line like this shows such a
> trickier example:
>
>     99526 0 100644 6761321/1027 Sat Feb 2 09:11:58 2002
>
> I don't know off the top of my head what to do with it, but it is laid out
> in the ext2 undeletion how-to.

well, the undeletion howto is a little old (1999) but interesting to look
through...

http://tldp.org/HOWTO/Ext2fs-Undeletion.html#toc9

>> to bring this more on topic, i would like to see what ways something like
>> this can be improved- it served useful to me in the past, but i'm sure it
>> can be made more useful:
>>
>> #!/usr/bin/perl
>> # added proper things when retyping it:
>> use warnings;
>> use diagnostics;
>> use strict;
>> #---
>> my $cfile      = "/tmp/commands.file";
>> my $filesystem = "/dev/hda6";
>> my @path = ("/tmp/recover", "", "/recover", "", ".ebu"); # making a path
>> my $date = "Oct";                 # just files from October
>>
>> open (OUT, ">$cfile");
>> print OUT "open $filesystem\n";   # i wonder what this is for?
>
> Debug message?

no! :) debugfs -f /filepath :)

"open" opens the filesystem without mounting it... then dump does its thing-
all inside debugfs!!

> command.file is the list of dump directives. It's supposedly a shell script
> that you can run later. So the above Perl script just generates another
> script. I am just not sure about
>
>     print OUT "open $filesystem\n";
>
> "open /dev/hda6" is not a meaningful command in shell scripts AFAIK.

and the mystery is solved!!! now i would like to look into using this to make
a perl script to automate file recovery- well, someday at any rate!

willy :)
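Putting the pieces of this exchange together, the intended flow appears to be
the following; this is a sketch only (it needs the original disk and command
file, so it is not runnable here):

```shell
# sketch only: replay the generated command file inside debugfs.
# the first line of /tmp/commands.file is "open /dev/hda6", which attaches
# the filesystem without mounting it; each "dump" line then writes a
# deleted inode out as a regular file.
/sbin/debugfs -f /tmp/commands.file
```

One caveat worth checking against the debugfs manual: when dumping by inode
number rather than by path, debugfs expects the inode in angle brackets
(e.g. `dump <2210070> /tmp/recover2/recover2210070.ebu`), so the lines the
script generates may need that form.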
RE: data recovery ext3 (was RE: Recover zip file via Archive::Zip)
From: Tassilo von Parseval [mailto:[EMAIL PROTECTED]
Subject: Re: data recovery ext3 (was RE: Recover zip file via Archive::Zip)

> On Thu, Aug 07, 2003 at 03:09:06PM -0400, West, William M wrote:
>
>> i am not sure what all the components do anymore- i did not document it
>> well :P
>
> Let me help. :-)

why thank you :)

>> now that i've looked at it, it's really for getting to files that are
>> unlinked etc. so i am not sure it will do you any good.
>
> Partly it might. The only problem with your script is that it cannot deal
> with data spanning more than 12 blocks (those blocks are usually not in one
> piece but fragmented over the hard disk). A line like this shows such a
> trickier example:
>
>     99526 0 100644 6761321/1027 Sat Feb 2 09:11:58 2002

well.. this is interesting- i am not sure how to interpret this line
properly. also, i assume that even split inodes will all get shoved through
the script... so, perhaps there is a way to concatenate/rename the split
inodes? or is there no way to see which belongs to which group?

> I don't know off the top of my head what to do with it, but it is laid out
> in the ext2 undeletion how-to.

well- this gives me a place to start looking. :)

>> to bring this more on topic, i would like to see what ways something like
>> this can be improved- it served useful to me in the past, but i'm sure it
>> can be made more useful:
>>
>> #!/usr/bin/perl
>> # added proper things when retyping it:
>> use warnings;
>> use diagnostics;
>> use strict;
>> #---
>> my $cfile      = "/tmp/commands.file";
>> my $filesystem = "/dev/hda6";
>> my @path = ("/tmp/recover", "", "/recover", "", ".ebu"); # making a path
>> my $date = "Oct";                 # just files from October
>>
>> open (OUT, ">$cfile");
>> print OUT "open $filesystem\n";   # i wonder what this is for?
>
> Debug message?

no- i wasn't all that sophisticated.. *shrug*

>> foreach (`/sbin/debugfs -R lsdel /dev/hda6`) { # why hard code /dev/hda6?
>>     # debugfs lets me list a bunch of inodes and i stick the list in a file
>>     m/(\d+)/;
>>     $path[3] = $1;    # had to split this regex to deal with some edge case
>>     $1 =~ m/(\d)/;    # but i can't recall what
>>     $path[1] = $1;
>>     my $quatch = join("", @path);
>>     my $place  = "$path[0]$path[1]";
>>     print OUT "dump $path[3] $quatch\n" if (m/$date/);
>
> Essentially, from a line like
>
>     2210070 1000 100600 228432/ 6 Wed Jul 23 09:26:10 2003
>
> you extract the inode (2210070) and from that turn
>
>     my @path = ("/tmp/recover", "", "/recover", "", ".ebu");
>
> into
>
>     @path = ("/tmp/removed", 2, "/recover", 2210070, ".ebu");
              ^^^--- typo! *laugh*
>
> So the deleted inode gets dumped into /tmp/removed/2/recover/2210070.ebu
>
> This could have been done more easily:
>
>     @path[3,1] = /((\d)\d+)/;

fantastic! i have never been terribly good with regexes and am trying to
avoid learning them again until perl6 *laugh* i must be too darn lazy!!

>>     `mkdir $place`;
>> }
>>
>> - then i chmod 755 command.file? or is it a file used by another tool???
>
> command.file is the list of dump directives. It's supposedly a shell script
> that you can run later. So the above Perl script just generates another
> script. I am just not sure about
>
>     print OUT "open $filesystem\n";
>
> "open /dev/hda6" is not a meaningful command in shell scripts AFAIK.

i agree- but something is nagging at the back of my head, telling me that it
was useful- perhaps it is readable by another command... instead of using
mount... *sigh* i don't know :P

> Tassilo

i'll try to rewrite it... see if i can get it to work again :)
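On the split-inode question: the "used/total" block count in an lsdel line at
least lets you flag files too big for an ext2 inode's 12 direct block
pointers. A small sketch, with the field layout assumed from the thread's
sample lines (the spacing in the quoted example is cleaned up here):

```shell
# flag deleted inodes whose block count exceeds the 12 direct blocks
# (sample line and field positions are assumptions, not real lsdel output)
line="99526 0 100644 676132 1/1027 Sat Feb 2 09:11:58 2002"
blocks=$(echo "$line" | awk '{split($5, b, "/"); print b[2]}')
if [ "$blocks" -gt 12 ]; then
  echo "inode 99526: $blocks blocks - needs indirect blocks"
fi
```

Such flagged inodes are the ones the ext2 undeletion how-to handles with
manual indirect-block chasing rather than a plain dump.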
RECOVERY
hi dear team, hope you are fine.

I dropped one important table in postgres :( how can I recover it??

thanks for your favour. thanks for your time.

_________________________________
Have a nice day
Sincerely yours
Nafiseh Saberi

"The amount of beauty required to launch one ship."

I appreciate your suggestions
Re: RECOVERY
> I dropped one important table in postgres :( how can I recover it??
> thanks for your favour. thanks for your time.

Uh, oh... you're probably out of luck. Unless you have a backup of the data
or the original SQL you used to create the table, you won't be able to
recover your table.

BTW, you might want to join the PostgreSQL beginners list (I think it's
called Novice). You can get to it via www.postgresql.org. A lot of the
actual PostgreSQL developers hang out on that list.

--
Brett
http://www.chapelperilous.net/
BEWARE! People acting under the influence of human nature.
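For completeness: if a backup does exist, a single table can be pulled back
out of it. The database, table, and file names below are made up for
illustration, and a running server plus an existing dump are assumed:

```shell
# restore one table from a custom-format pg_dump archive
# (all names here are hypothetical)
pg_restore --dbname=mydb --table=important_table backup.dump

# from a plain SQL dump, extract that table's statements and replay them:
#   psql mydb -f important_table.sql
```

Either way the dump has to predate the DROP TABLE; nothing here recovers data
that was never backed up.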