Re: exclude files and large partitions
>1) Do I need to put the exclude files on each client that makes use of
>them, or do they reside on the server?

They need to be on the client.

>2) I am trying to backup a partition that is too large to fit on a
>single tape. I have tried to "thin-out" the amount of data that gets
>archived by using exclude files, however amanda still complains that
>the partition will not fit on a single tape.

Do you know if the exclusions are working? There are two ways to tell.

First is to look in the amdump log file in the amanda.conf directory.
Find where the estimates are being run and see if the size is better
than expected.

Second is to set up some test cases. GNU tar exclude patterns are an
art form :-). Here's a little script I use to test with:

  ftp://gandalf.cc.purdue.edu/pub/amanda/gtartest-exclude

You set up little directories and empty files like that on the live
system, set up an exclusion list and see what happens. Then tweak the
exclusions until it does what you want.

>Justin.

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
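A minimal sketch of the kind of test John describes, assuming GNU tar's
--exclude-from option; the paths and patterns here are illustrative,
not taken from his script:

  # Build a small throwaway tree with empty files:
  mkdir -p /tmp/extest/src /tmp/extest/cache
  touch /tmp/extest/src/keep.c /tmp/extest/cache/junk.tmp

  # Candidate exclusion list, one pattern per line:
  printf '%s\n' './cache' '*.tmp' > /tmp/extest/exclude

  # Archive with the exclusions applied, then list what actually went
  # in; tweak the patterns until the listing matches your intent:
  gtar --create --file /tmp/extest.tar \
       --directory /tmp/extest \
       --exclude-from /tmp/extest/exclude .
  gtar --list --file /tmp/extest.tar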
exclude files and large partitions
I am using gnu-tar and am trying to make use of its "exclude file"
capability within Amanda. Can someone help me out with 2 basic
"exclude file" questions?

1) Do I need to put the exclude files on each client that makes use of
them, or do they reside on the server? e.g. --- in disklist on server:

  # fred - test client
  fred  /usr  {
      user-tar
      exclude list "/usr/local/etc/amanda/xray/fred.exclude"
  }

Should fred.exclude be on the amanda server, or on fred?

2) I am trying to backup a partition that is too large to fit on a
single tape. I have tried to "thin-out" the amount of data that gets
archived by using exclude files, however amanda still complains that
the partition will not fit on a single tape. Is there a work-around
for this problem?

Thanks,

Justin.
Re: Linux and dump
On Thu, May 17, 2001 at 09:53:59PM -0300, Alexandre Oliva wrote:

> I can also tell from personal experience that I haven't had trouble
> with GNU tar 1.13.17 on GNU/Linux/x86, but I still haven't been able
> to do backups reliably with 1.13.19 (it will generally abort part-way
> through the back-up; I suspect a network problem, but a bug in GNU tar
> still isn't ruled out)

My systems (Debian potato):

  media:~$ tar --version
  tar (GNU tar) 1.13.17
  media:~$ gcc --version
  2.95.2

No problems here, except that the kernel isn't best friends with my ATA
tape drive sometimes. The joys of an academic budget! :) I've never
tried amanda on anything but Linux, so I can't speak about that.

sunfreeware.com might have a good version of gcc.

--
--Ray

Sotto la panca la capra crepa
sopra la panca la capra campa
Re: Linux and dump
On May 17, 2001, "John R. Jackson" <[EMAIL PROTECTED]> wrote: >> Do you know which compiler was used to build this version of GNU tar? > Looks like gcc 2.8.1. Gee. That's dead broken. First thing I'd do would be to get GNU tar 1.13.19 built with a newer compiler. But I can tell from personal experience that, even when built with GCC 2.95.3, on Solaris 7/x86, it wouldn't even be able to do estimates properly. So I stayed with 1.12 for now. Investigating the actual problem is in my to-do list. I can also tell from personal experience that I haven't had trouble with GNU tar 1.13.17 on GNU/Linux/x86, but I still haven't been able to do backups reliably with 1.13.19 (it will generally abort part-way through the back-up; I suspect a network problem, but a bug in GNU tar still isn't ruled out) -- Alexandre Oliva Enjoy Guarana', see http://www.ic.unicamp.br/~oliva/ Red Hat GCC Developer aoliva@{cygnus.com, redhat.com} CS PhD student at IC-Unicampoliva@{lsd.ic.unicamp.br, gnu.org} Free Software Evangelist*Please* write to mailing lists, not to me
Re: Linux and dump
On Thu, May 17, 2001 at 05:15:00PM -0500, John R. Jackson wrote:
> >Do you know which compiler was used to build this version of GNU tar?
>
> Looks like gcc 2.8.1.

I was never able to compile GNU tar with gcc 2.8.1 on Solaris.
But it works fine if compiled with Sun cc.

Jean-Louis
--
Jean-Louis Martineau                    email: [EMAIL PROTECTED]
Departement IRO, Universite de Montreal
C.P. 6128, Succ. CENTRE-VILLE           Tel: (514) 343-6111 ext. 3529
Montreal, Canada, H3C 3J7               Fax: (514) 343-5834
Re: Linux and dump
> Did you try to read this tar-file with some other tar program?  ...

>... such as GNU tar 1.12?

Same result.

>> Looks like gcc 2.8.1.

>Gee.  That's dead broken.  ...

Yeah, yeah. But you're just a wee bit biased :-) :-).

>First thing I'd do would be to get GNU tar
>1.13.19 built with a newer compiler.  ...

So what's the recommended **stable** gcc these days? I'll try to
scrounge up the 100+ MBytes and CPU hours it takes to build and give
that a try.

>Alexandre Oliva

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
Re: Linux and dump
On Thu, May 17, 2001 at 06:36:16PM -0300, Alexandre Oliva wrote:
> On May 17, 2001, Dan Wilder <[EMAIL PROTECTED]> wrote:
>
> > Is this an option amrecover can (or does) pass to tar?
>
> Nope.  amrecover explicitly tells tar which files to restore, so this
> is not an issue.

Ahh. Good!

Amrecover is such a great tool for restoring files, whether a few or a
whole filesystem full. I have a hard time imagining a circumstance
that would lead me to go directly to the underlying archive utility.
For restoring from Amanda tapes, that is. Be that underlying utility
dump, xfsdump, tar, or what not.

--
Dan Wilder
Re: Linux and dump
On Thu, May 17, 2001 at 04:02:19PM -0500, John R. Jackson wrote:

> But if'n I were you folks using this version of GNU tar, I'd start
> making damn sure your tapes had anything even faintly useful on them.
> Amverify should do the trick, although I certainly wouldn't stop there.
>
> Not to panic you or anything, of course.  Have a nice day.  :-)

Hee, hee! Yeah, conservative is better in backup utilities. I've
doggedly installed GNU tar 1.12 plus amanda patches on every system
I've been responsible for. Restored lots of files. Never lost one.

One key point is test, test, test. Don't ever assume backup is
working, especially if you've changed anything. Do some random
restores even if you don't have to. Fuss over them. Look at 'em with
a fine-tooth comb.

I believe this is much more important to safe and successful backups
than your choice of archiving utility. Plus, if you made a wrong
choice, this is your chance to find out!

--
Dan Wilder
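A minimal sketch of one such spot check, assuming a non-rewinding tape
device /dev/nst0 and a GNU-tar-based Amanda dump of a client called
"fred"; the names are illustrative:

  # Rewind, then pull fred's /usr image off the tape.  amrestore
  # writes the dump image (uncompressed by default) to a file in the
  # current directory:
  mt -f /dev/nst0 rewind
  amrestore /dev/nst0 fred /usr

  # For a GNU tar image, list it -- or better, extract a few files
  # somewhere safe and compare them against the live originals:
  tar tvf fred.*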
Re: Linux and dump
On May 17, 2001, "John R. Jackson" <[EMAIL PROTECTED]> wrote: >> Did you try to read this tar-file with some other tar program? ... ... such as GNU tar 1.12? -- Alexandre Oliva Enjoy Guarana', see http://www.ic.unicamp.br/~oliva/ Red Hat GCC Developer aoliva@{cygnus.com, redhat.com} CS PhD student at IC-Unicampoliva@{lsd.ic.unicamp.br, gnu.org} Free Software Evangelist*Please* write to mailing lists, not to me
Re: Linux and dump
On May 17, 2001, "John R. Jackson" <[EMAIL PROTECTED]> wrote: > Yeah, yeah. But you're just a wee bit biased :-) :-). Me! No way! :-D >> First thing I'd do would be to get GNU tar >> 1.13.19 built with a newer compiler. ... > So what's the recommended **stable** gcc these days? 2.95.3 > I'll try to scrounge up the 100+ MBytes and CPU hours it takes to > build and give that a try. I could build GNU tar 1.13.19 for you, to run on Solaris 2.6/sparc, if that would help. Just let me know. -- Alexandre Oliva Enjoy Guarana', see http://www.ic.unicamp.br/~oliva/ Red Hat GCC Developer aoliva@{cygnus.com, redhat.com} CS PhD student at IC-Unicampoliva@{lsd.ic.unicamp.br, gnu.org} Free Software Evangelist*Please* write to mailing lists, not to me
Re: Linux and dump
> Summary: "Dump was a stupid program in the first place. Leave it behind." Did you read http://reality.sgi.com/zwicky_neu/testdump.doc.html ? (Elizabeth D. Zwicky, Torture-testing Backup and Archive Programs: Things You Ought to Know But Probably Would Rather Not) Dump rocks!
strange amanda problem
Hi,

We are experiencing a strange problem with our amanda backup system.
It is failing with the following message about half of the time. There
seems to be no pattern to the failures (e.g. certain tapes or certain
days of the week). The dump summary seems to suggest a timeout, but we
upped the dtimeout value (also follows) based on a previous posting we
saw that had a similar error and suggested this was the solution. We
have tried re-installing, but it still happens. Is there any other
timeout that could be causing this? We are running Red Hat Linux 7,
and the amanda version is 2.4.2p2.

Any help would be appreciated,

Thanks,
Davin

** failure message:

These dumps were to tape DailySet105.
The next tape Amanda expects to use is: DailySet106.

FAILURE AND STRANGE DUMP SUMMARY:
  monk.ednet md0 lev 1 FAILED [data timeout]

STATISTICS:
                          Total       Full      Daily
                        --------   --------   --------
Estimate Time (hrs:min)    0:01
Run Time (hrs:min)         1:35
Dump Time (hrs:min)        0:44       0:00       0:44
Output Size (meg)        2223.0        0.0     2223.0
Original Size (meg)      2223.0        0.0     2223.0
Avg Compressed Size (%)     --         --         --    (level:#disks ...)
Filesystems Dumped            1          0          1   (1:1)
Avg Dump Rate (k/s)       857.3        --       857.3

Tape Time (hrs:min)        0:44       0:00       0:44
Tape Size (meg)          2223.0        0.0     2223.0
Tape Used (%)              12.1        0.0       12.1   (level:#disks ...)
Filesystems Taped             1          0          1   (1:1)
Avg Tp Write Rate (k/s)   856.9        --       856.9

FAILED AND STRANGE DUMP DETAILS:

/-- monk.ednet md0 lev 1 FAILED [data timeout]
sendbackup: start [monk.ednet.net:md0 level 1]
sendbackup: info BACKUP=/sbin/dump
sendbackup: info RECOVER_CMD=/sbin/restore -f... -
sendbackup: info end
| DUMP: WARNING: There is no inferior level dump on this filesystem
| DUMP: WARNING: Assuming a level 0 dump by default
| DUMP: Date of this level 0 dump: Thu May 17 01:01:10 2001
| DUMP: Date of last level 0 dump: the epoch
| DUMP: Dumping /dev/md0 (/) to standard output
| DUMP: Label: /
| DUMP: mapping (Pass I) [regular files]
| DUMP: mapping (Pass II) [directories]
| DUMP: estimated 4891937 tape blocks.
| DUMP: Volume 1 started at: Thu May 17 01:01:20 2001
| DUMP: dumping (Pass III) [directories]
| DUMP: dumping (Pass IV) [regular files]
\--------

NOTES:
  taper: tape DailySet105 kb 2276352 fm 1 [OK]

DUMP SUMMARY:
                                DUMPER STATS               TAPER STATS
HOSTNAME     DISK  L  ORIG-KB  OUT-KB  COMP%  MMM:SS   KB/s  MMM:SS   KB/s
--------------------------------------------------------------------------
monk.ednet.n md0   1   FAILED ---------------------------------------
monk.ednet.n sdc1  1  2276320 2276320    --    44:15  857.3   44:16  856.9

(brought to you by Amanda version 2.4.2p2)

* amanda.conf file timeouts:

bumpsize 20 Mb   # minimum savings (threshold) to bump level 1 -> 2
bumpdays 1       # minimum days at each level
bumpmult 4       # threshold = bumpsize * bumpmult^(level-1)

etimeout 1200    # number of seconds per filesystem for estimates.
#etimeout -600   # total number of seconds for estimates.
# a positive number will be multiplied by the number of filesystems on
# each host; a negative number will be taken as an absolute total time-out.
# The default is 5 minutes per filesystem.

dtimeout 2700    # number of idle seconds before a dump is aborted.

ctimeout 60      # maximum number of seconds that amcheck waits
                 # for each client host

tapebufs 20
Re: Linux and dump
>Just to make sure (and for the enlightenment of anyone else trying to
>duplicate the problem): you have removed /tmp/jrj/zli before every tar
>command, right?  ...

Yes. What I posted was really the list of commands I put into a little
test script. The cp of /dev/null to zli is part of that and happens on
each run.

>'cause I can't seem to be able to duplicate this
>problem on my machines.  ...

That also doesn't surprise me. By now this mailing list would have
been flooded with "why is Amanda broken" letters :-) if it were
universal. Not that that should make anyone feel warm and fuzzy about
what *their* GNU tar is doing.

>Do you know which compiler was used to build this version of GNU tar?

Looks like gcc 2.8.1.

>Did you try to read this tar-file with some other tar program?  ...

Using Solaris tar (not exactly known for its stellar quality, either):

  $ tar tvf z.tar
  drwxr-xr-x 10281/1233      62 Jun  8 13:53 1999 07301033644/./
  -rwxr-xr-x 10281/1233   10412 Jun  8 13:53 1999 07277623352/./getMailHostName
  -rw-r--r-- 10281/1233   15044 Jun  7 18:16 1999 07277623352/./res_init.c

No complaints, but it also didn't show all four files, just the first
two (and with bogus names, but that's just Solaris tar being stupid).

AIX tar did the same thing (two files) but without the bogus directory
name.

On a whim, I changed the test to write to a pipe to cat instead of
directing stdout to the file, since I know GNU tar looks at what it's
writing to (e.g. so it can detect /dev/null). Didn't help. I also
tried writing directly via --file instead of "--file -". Ditto.

>Alexandre Oliva

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
Re: GNUTAR backup of VxFS fails
On May 17, 2001, David Carter <[EMAIL PROTECTED]> wrote:

> The gtar version is 1.11.8.

You need at least 1.12 + Amanda's patches (but not 1.13 and, given
JJ's recent posting about 1.13.19 on Solaris, I'd stay away from it
too, at least for now).

> where --file becomes "/dev/null"

This is just how estimates are done. sendsize.debug might say more
about the reason it failed.

--
Alexandre Oliva   Enjoy Guarana', see http://www.ic.unicamp.br/~oliva/
Red Hat GCC Developer                  aoliva@{cygnus.com, redhat.com}
CS PhD student at IC-Unicamp        oliva@{lsd.ic.unicamp.br, gnu.org}
Free Software Evangelist    *Please* write to mailing lists, not to me
Re: Linux and dump
On May 17, 2001, "John R. Jackson" <[EMAIL PROTECTED]> wrote: > --listed-incremental /tmp/jrj/zli \ > This does what Amanda does for a full dump. Just to make sure (and for the enlightenment of anyone else trying to duplicate the problem): you have removed /tmp/jrj/zli before every tar command, right? 'cause I can't seem to be able to duplicate this problem on my machines. Do you know which compiler was used to build this version of GNU tar? > $ gtar tvf z.tar > drwxr-xr-x jrj/pucc 62 1999-06-08 13:53:30 ./ > -rwxr-xr-x jrj/pucc 10412 1999-06-08 13:53:30 ./getMailHostName > gtar: Unexpected EOF in archive > gtar: Error is not recoverable: exiting now Did you try to read this tar-file with some other tar program? Hmm, I guess this doesn't matter much; it's pretty obvious tar didn't write everything out to the tarball, given how small it is. -- Alexandre Oliva Enjoy Guarana', see http://www.ic.unicamp.br/~oliva/ Red Hat GCC Developer aoliva@{cygnus.com, redhat.com} CS PhD student at IC-Unicampoliva@{lsd.ic.unicamp.br, gnu.org} Free Software Evangelist*Please* write to mailing lists, not to me
Re: Linux and dump
On May 17, 2001, Dan Wilder <[EMAIL PROTECTED]> wrote:

> Is this an option amrecover can (or does) pass to tar?

Nope. amrecover explicitly tells tar which files to restore, so this
is not an issue.

--
Alexandre Oliva   Enjoy Guarana', see http://www.ic.unicamp.br/~oliva/
Red Hat GCC Developer                  aoliva@{cygnus.com, redhat.com}
CS PhD student at IC-Unicamp        oliva@{lsd.ic.unicamp.br, gnu.org}
Free Software Evangelist    *Please* write to mailing lists, not to me
Re: Linux and dump
On May 17, 2001, "John R. Jackson" <[EMAIL PROTECTED]> wrote: > Oh, shit. I wouldn't have put it better :-) > I tried GNU tar 1.12 (plus the Amanda patches) and it works fine. Thanks God I'm still using that one, on most systems I've got. Talk `conservative' regarding backups. > But if'n I were you folks using this version of GNU tar, I'd start > making damn sure your tapes had anything even faintly useful on them. This is good advice, regardless of the backup tool you're using. > Not to panic you or anything, of course. Have a nice day. :-) :-) -- Alexandre Oliva Enjoy Guarana', see http://www.ic.unicamp.br/~oliva/ Red Hat GCC Developer aoliva@{cygnus.com, redhat.com} CS PhD student at IC-Unicampoliva@{lsd.ic.unicamp.br, gnu.org} Free Software Evangelist*Please* write to mailing lists, not to me
Re: GNUTAR backup of VxFS fails
>Running amanda 2.4.1p2 on Solaris 2.6.  The gtar version is 1.11.8.  ...

You have to run GNU tar 1.12 with the patches from www.amanda.org.

There is also a 1.13.19, but I just found some serious problems and
wouldn't touch it with a 10 foot pole.

>David Carter

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
GNUTAR backup of VxFS fails
Running amanda 2.4.1p2 on Solaris 2.6. The gtar version is 1.11.8.
All the file systems are ufs except for one Veritas VxFS file system.
The VxFS is set to run with the GNUTAR dumptype and was working for
many months. Lately, however, amanda gives me the "syndesis1 /a lev 0
FAILED [disk /a offline on syndesis1?]" error.

According to the /tmp/amanda/runtar.debug file on syndesis1, the gtar
dump is initiating but is now going directly to /dev/null. Any idea
why this is happening, or where --file becomes "/dev/null"? Maybe I'm
reading it wrong, but that looks like what's happening. Also, a file
called /usr/local/var/amanda/gnutar-lists/syndesis1_a_1 exists, but it
is mod-dated from the last amanda run where this dump didn't come up
as "FAILED--".

/tmp/amanda/runtar.debug:

  runtar: debug 1 pid 1863 ruid 0 euid 0 start time Wed May 16 01:55:03 2001
  /usr/local/bin/gtar: version 2.4.1p1
  running: /usr/local/bin/gtar: /usr/local/bin/gtar --create --directory /a
  --listed-incremental /usr/local/var/amanda/gnutar-lists/syndesis1_a_1.new
  --sparse --one-file-system --ignore-failed-read --totals --file /dev/null .

David Carter
McLeodUSA Information Systems
[EMAIL PROTECTED]
281-465-1835
Re: Linux and dump
I was going to stay out of yet another round of "dump vs tar", but what
the hell. Here's a little eye-opener for all you tar folks ...

  $ (umask 077 ; mkdir /tmp/jrj)
  $ cd /tmp/jrj
  $ rm z.tar zli
  $ cp /dev/null zli
  $ gtar --version
  tar (GNU tar) 1.13.19
  ...
  $ uname -srp
  SunOS 5.6 sparc
  $ gtar --create \
         --file - \
         --directory /work/tmp/resolv \
         --one-file-system \
         --listed-incremental /tmp/jrj/zli \
         --sparse \
         --ignore-failed-read \
         --totals \
         . > z.tar

This does what Amanda does for a full dump. It creates an empty listed
incremental file, then does the deed.

The /work/tmp/resolv directory has four files in it and was just a
random choice of something to test with:

  $ ls -lR /work/tmp/resolv
  /work/tmp/resolv:
  total 82
  -rwxr-xr-x   1 jrj      pucc       10412 Jun  8  1999 getMailHostName
  -rw-r--r--   1 jrj      pucc        3028 Jun  8  1999 getMailHostName.c
  -rw-r--r--   1 jrj      pucc       15044 Jun  7  1999 res_init.c
  -rw-r--r--   1 jrj      pucc       12170 Jun  7  1999 res_query.c

Here's the output of the gtar --create:

  Total bytes written: 10240 (10kB, 791kB/s)

Fine. Everything's hunky dory. Hmmm, how about we do a "tar t" on
that:

  $ gtar tvf z.tar
  drwxr-xr-x jrj/pucc         62 1999-06-08 13:53:30 ./
  -rwxr-xr-x jrj/pucc      10412 1999-06-08 13:53:30 ./getMailHostName
  gtar: Unexpected EOF in archive
  gtar: Error is not recoverable: exiting now

Oh, shit.

I've tried this on several directories. They all fail. GNU tar
(1.13.19) is broken, at least on my Solaris box.

And because GNU tar didn't say squat about any problems, you'd never
know it happened unless you looked really closely at the E-mail output
(e.g. the dump size is too small), or tried to do an amrecover (the
file list would be wrong because the "tar t" it runs would have
failed) or a real restore.

I tried GNU tar 1.12 (plus the Amanda patches) and it works fine.

>You've been lucky for years.  Some day, [dump will] bite you, and you'll
>regretfully remember this discussion.  ...

>So your point is that dump should refuse to run on a mounted
>filesystem?  Yeah, that sounds reasonable to me.  Then more people
>would learn about its limitations and switch to some saner backup
>tool.

And which tool would that be? :-)

I just found this problem a couple of days ago (while looking into
George Herson's problem) and haven't had time to start tracking it
down or report it.

But if'n I were you folks using this version of GNU tar, I'd start
making damn sure your tapes had anything even faintly useful on them.
Amverify should do the trick, although I certainly wouldn't stop there.

Not to panic you or anything, of course. Have a nice day. :-)

>Alexandre Oliva

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
Re: Linux and dump
I think I was the one who made the suggestion to remove ext2 dump, but
I was wrong; you don't have to do that. ./configure will find both
xfsdump and dump, and amanda will choose whichever program is
appropriate for the type of filesystem, i.e. if it is an XFS
filesystem and you have not specified GNUTAR, xfsdump will be used
automatically.

Alexandre Oliva wrote:
>
> On May 17, 2001, "C. Chan" <[EMAIL PROTECTED]> wrote:
>
> > I followed suggestions to remove ext2 dump so
> > Amanda would detect xfsdump and recompiled, but I find this rather inelegant.
>
> What is inelegant?  Removing ext2 dump?  You didn't have to do that.
> You only need xfsdump available at configure time to get the xfsdump
> supporting bits enabled; the existence of ext2 dump doesn't make a
> difference.

--
"Jonathan F. Dill" ([EMAIL PROTECTED])
CARB Systems and Network Administrator
Home Page: http://www.umbi.umd.edu/~dill
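For the per-filesystem control C. Chan asked about, a minimal
disklist/dumptype sketch, assuming standard amanda.conf syntax; the
hostnames and dumptype names here are made up:

  # In amanda.conf -- one dumptype per backup program.  With
  # program "DUMP", the client runs whatever native dump matches the
  # filesystem type (e.g. xfsdump for XFS, as detected at configure
  # time); program "GNUTAR" forces GNU tar regardless of fs type.
  define dumptype std-dump {
      program "DUMP"
  }
  define dumptype std-tar {
      program "GNUTAR"
  }

  # In disklist -- choose per filesystem:
  fileserver  /export/xfs  std-dump
  fileserver  /export/jfs  std-tar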
Re: Linux and dump
On Thu, May 17, 2001 at 05:09:16PM -0300, Alexandre Oliva wrote:
> On May 17, 2001, Ron Stanonik <[EMAIL PROTECTED]> wrote:
>
> > Sorry for the newbie question, but how can tar be configured so
> > that after restoring a full and an incremental the filesystem has
> > exactly the files at the time of the incremental, not any files
> > which were present during the full but removed before the incremental.
>
> Use -G at restore time.  From the GNU tar manual:
>
>    `--incremental' (`-G') in conjunction with `--extract' (`--get',
> `-x') causes `tar' to read the lists of directory contents previously
> stored in the archive, _delete_ files in the file system that did not
> exist in their directories when the archive was created, and then
> extract the files in the archive.
>
>    This behavior is convenient when restoring a damaged file system from
> a succession of incremental backups: it restores the entire state of
> the file system to that which obtained when the backup was made.  If
> `--incremental' (`-G') isn't specified, the file system will probably
> fill up with files that shouldn't exist any more.

Is this an option amrecover can (or does) pass to tar?

Thanks, by the way, for the words to the wise on "dump".

--
Dan Wilder <[EMAIL PROTECTED]>        Technical Manager & Editor
SSC, Inc. P.O. Box 55549              Phone: 206-782-8808
Seattle, WA 98155-0549                URL http://embedded.linuxjournal.com/
Re: switch to tar
On May 17, 2001, [EMAIL PROTECTED] wrote:

> could it be possible, that amanda can't overwrite tapes with "tar"
> that have been previously written with "dump" ?

Nope. The error message generally means tape full or media error.

Is there any message in the e-mail log specifying how much data taper
managed to get onto the tape? Does it match your expectations?

--
Alexandre Oliva   Enjoy Guarana', see http://www.ic.unicamp.br/~oliva/
Red Hat GCC Developer                  aoliva@{cygnus.com, redhat.com}
CS PhD student at IC-Unicamp        oliva@{lsd.ic.unicamp.br, gnu.org}
Free Software Evangelist    *Please* write to mailing lists, not to me
Re: Linux and dump
On May 17, 2001, "Anthony A. D. Talltree" <[EMAIL PROTECTED]> wrote: >> But this is not a problem of linux, it's a problem of dump. > Yeah yeah yeah. We've all heard that a million times before, and yet on > real systems we've been happily using dump for years without > consequences. You've been lucky for years. Some day, it'll bite you, and you'll regretfully remember this discussion. Or perhaps you'll keep on being lucky. Good luck :-) > A more fitting analogy would be GM making cars out of cardboard and > warning people to not leave them out in the sun because they might catch > fire. So your point is that dump should refuse to run on a mounted filesystem? Yeah, that sounds reasonable to me. Then more people would learn about its limitations and switch to some saner backup tool. -- Alexandre Oliva Enjoy Guarana', see http://www.ic.unicamp.br/~oliva/ Red Hat GCC Developer aoliva@{cygnus.com, redhat.com} CS PhD student at IC-Unicampoliva@{lsd.ic.unicamp.br, gnu.org} Free Software Evangelist*Please* write to mailing lists, not to me
Re: Linux and dump
On May 17, 2001, "C. Chan" <[EMAIL PROTECTED]> wrote: > I followed suggestions to remove ext2 dump so > Amanda would detect xfsdump and recompiled, but I find this rather inelegant. What is inelegant? Removing ext2 dump? You didn't have to do that. You only need xfsdump available at configure time to get the xfsdump supporting bits enabled; the existence of ext2 dump doesn't make a difference. -- Alexandre Oliva Enjoy Guarana', see http://www.ic.unicamp.br/~oliva/ Red Hat GCC Developer aoliva@{cygnus.com, redhat.com} CS PhD student at IC-Unicampoliva@{lsd.ic.unicamp.br, gnu.org} Free Software Evangelist*Please* write to mailing lists, not to me
Re: Linux and dump
On May 17, 2001, Ron Stanonik <[EMAIL PROTECTED]> wrote:

> Sorry for the newbie question, but how can tar be configured so
> that after restoring a full and an incremental the filesystem has
> exactly the files at the time of the incremental, not any files
> which were present during the full but removed before the incremental.

Use -G at restore time. From the GNU tar manual:

   `--incremental' (`-G') in conjunction with `--extract' (`--get',
`-x') causes `tar' to read the lists of directory contents previously
stored in the archive, _delete_ files in the file system that did not
exist in their directories when the archive was created, and then
extract the files in the archive.

   This behavior is convenient when restoring a damaged file system from
a succession of incremental backups: it restores the entire state of
the file system to that which obtained when the backup was made.  If
`--incremental' (`-G') isn't specified, the file system will probably
fill up with files that shouldn't exist any more.

--
Alexandre Oliva   Enjoy Guarana', see http://www.ic.unicamp.br/~oliva/
Red Hat GCC Developer                  aoliva@{cygnus.com, redhat.com}
CS PhD student at IC-Unicamp        oliva@{lsd.ic.unicamp.br, gnu.org}
Free Software Evangelist    *Please* write to mailing lists, not to me
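A minimal sketch of that restore sequence, with archive names made up
for illustration:

  cd /filesystem/to/restore

  # Restore the level 0 (full) archive first, then each incremental
  # in order.  --incremental (-G) makes GNU tar honor the directory
  # listings stored in each archive and delete files that no longer
  # existed when that archive was made:
  tar --extract --incremental --file /backups/full.tar
  tar --extract --incremental --file /backups/incr1.tar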
Re: Linux and dump
Sorry for the newbie question, but how can tar be configured so that
after restoring a full and an incremental the filesystem has exactly
the files at the time of the incremental, not any files which were
present during the full but removed before the incremental?

Thanks,

Ron Stanonik
[EMAIL PROTECTED]
Re: Linux and dump
On Thu, 17 May 2001 at 8:50am, Anthony A. D. Talltree wrote

> >Yikes!  A troll!
>
> Nope, just a naked emperor.

This Needs To Stop. Now. I have had enough of anti-Linux flamefests,
and I especially do not want to see one on a mailing list with one of
the highest SNRs I've ever seen. If you want to continue this with
anybody (and no, I am not interested), take it off list.

--
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University
Re: Linux and dump
On Thu, 17 May 2001 at 11:09am, C. Chan wrote

> I made some backups with Amanda using GNU tar and xfsdump and the recovery
> under both was consistent.  I followed suggestions to remove ext2 dump so
> Amanda would detect xfsdump and recompiled, but I find this rather inelegant.
> I plan to make a JFS partition as well, so this machine will
> have multiple dumps, and I'd like to be able to specify which dump to
> use for a filesystem in my amanda.conf file.

As John R. Jackson has pointed out, amanda "should" discover the fs
type and use the proper dump program (given that it was detected at
configure time). Multiple filesystems on one box should not be a
problem. If they are, it's a bug and should be reported as such.

--
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University
Re: Linux and dump
Also Sprach Anthony A. D. Talltree:

> >Yikes!  A troll!
>
> Nope, just a naked emperor.

No, a naked emperor penguin.

--
C. Chan < [EMAIL PROTECTED] >
Finger [EMAIL PROTECTED] for PGP public key.
Re: Linux and dump
Also Sprach Dan Wilder:

> On Thu, May 17, 2001 at 07:51:09AM -0700, Anthony A. D. Talltree wrote:
> > >Summary: "Dump was a stupid program in the first place.  Leave it behind."
> >
> > What it really means: "Linux is a toy system and rather than fix our
> > design flaws we'll play sour grapes."
>
> Yikes!  A troll!

(Well, this stuff isn't all that related to Amanda but...)

*Shrug*. Troll or not, the fact is that some of us have to work with
Linux systems and get them backed up. And to be honest, XFS support is
a big factor in choosing Linux over other free OSes like *BSD, because
now we don't have to format ~TB worth of IRIX disk when moving it to a
new platform. Linux may not be close to IRIX in terms of stability or
scalability, but the siren song of commodity PC hardware vitiates all
logic.

I am not certain whether Torvalds was referring to ext2 dump or to
block-level backup in general. The decision to use the page cache
rather than the buffer cache may be different from what BSD or SysV
have done, but I don't think it is totally without merit. In exchange
for greater difficulties in writing utilities like dump, there is the
supposed advantage of simplifying the filesystem programming API and
so on. Also, SGI's port (albeit with numerous patches) of XFS and
presumably the IBM JFS port demonstrate that you can have a dump-like
util for journaled filesystems under Linux.

As for my XFS progress on Linux: I downloaded the SGI-patched 2.4.3
kernel and utils and installed them on a box running a Mandrake 8.0
installation. I also had to grab the updated quota-utils from
SourceForge. I made an XFS filesystem, copied over some stuff, started
some r/w to files, did the power on/off, and it seemed to recover
properly. I applied quotas, went over quota, and the behavior was as
expected.

I made some backups with Amanda using GNU tar and xfsdump and the
recovery under both was consistent. I followed suggestions to remove
ext2 dump so Amanda would detect xfsdump and recompiled, but I find
this rather inelegant. I plan to make a JFS partition as well, so this
machine will have multiple dumps, and I'd like to be able to specify
which dump to use for a filesystem in my amanda.conf file.

--
C. Chan < [EMAIL PROTECTED] >
Finger [EMAIL PROTECTED] for PGP public key.
Re: Linux and dump
>Dump bypasses the filesystem level to access the data and therefore only
>works reliably if all caches are flushed to disk. This is only guaranteed
>if the filesystem is unmounted or at least mounted read-only.

Yes, I know. I learned that 15 years ago.

>But this is not a problem of linux, it's a problem of dump.

Yeah yeah yeah. We've all heard that a million times before, and yet
on real systems we've been happily using dump for years without
consequences.

>what you are saying is the same as if you try to change a wheel
>of your car without making sure it won't roll away while working,
>and when it rolls away and breaks your foot, saying "what a damn bad
>car..."

A more fitting analogy would be GM making cars out of cardboard and
warning people to not leave them out in the sun because they might
catch fire.
Re: Linux and dump
"Anthony A. D. Talltree" schrieb: > > >Summary: "Dump was a stupid program in the first place. Leave it behind." > > What it really means: "Linux is a toy system and rather than fix our > design flaws we'll play sour grapes." wrong. what it realy means is: dont use a loaded rifle as a walkingstick, you might shoot your foot. dump is ok with linux if you use it on an unmounted filesystem. everything other is abuse. Dump bypasses the filesystemlevel to access the data and therefor only works reliable if all caches are flushed to disk. This is only garanteed if the filesystem is unmounted or at least mounted read-only. But this is not a problem of linux, it's a problem of dump. what you are saying is the same as if you try to change a wheel of your car without making shure it won't roll away while working, and when it rolls away an breaks your foot saying "what a damn bad car..." Christoph
Re: Linux and dump
>Yikes!  A troll!

Nope, just a naked emperor.
Re: Linux and dump
On Thu, May 17, 2001 at 07:51:09AM -0700, Anthony A. D. Talltree wrote:
> >Summary: "Dump was a stupid program in the first place.  Leave it behind."
>
> What it really means: "Linux is a toy system and rather than fix our
> design flaws we'll play sour grapes."

Yikes! A troll!

--
Dan Wilder <[EMAIL PROTECTED]>        Technical Manager & Editor
SSC, Inc. P.O. Box 55549              Phone: 206-782-8808
Seattle, WA 98155-0549                URL http://embedded.linuxjournal.com/
Re: Linux and dump
>Summary: "Dump was a stupid program in the first place.  Leave it behind."

What it really means: "Linux is a toy system and rather than fix our
design flaws we'll play sour grapes."
switch to tar
Hi,

I have been using amanda 2.4.2p2 for quite some weeks without any
problems. Now I want to switch from the dump program to using gnutar
instead. The only thing I did was to add an entry to my dumptype
definition:

  define dumptype always-full {
      global
      program "GNUTAR"
      priority high
      dumpcycle 0
  }

amcheck still runs fine, but amdump now produces errors in /amdump:

  taper: writing end marker. [XXX ERR kb 6976 fm 5]
  driver: result time 45.085 from dumper0: FAILED 01-00010 ["data write: Broken pipe"]
  driver: result time 45.085 from taper: TAPE-ERROR 00-9 [writing file: Input/output error]

  FAILURE AND STRANGE DUMP SUMMARY:
    client dirpath lev 0 FAILED [out of tape]
    client dirpath lev 0 FAILED [dump to tape failed]
    client dirpath lev 0 FAILED ["data write: Broken pipe"]

To me it looks as if the tape is full, but it's always been big enough
before. Could it be possible that amanda can't overwrite tapes with
"tar" that have been previously written with "dump"?

Any hints?

thnx
mostart
Re: Linux and dump
On Thu, 17 May 2001, Jonathan Dill wrote:

> I'm planning to migrate to SGI XFS on Linux--SGI has released an
> installer CD for Red Hat 7.1 which can make XFS filesystems. XFS is a
> journaled filesystem, and it can be run over RAID, unlike ext3 which had
> problems with RAID on the 2.2 kernel. You can download the installer for
> free from ftp://linux-xfs.sgi.com but the server appears to be down
> right now.

I'll drop a few lines on this. The graphical cdrom installer works
just fine, but I usually do my installs from an http server set up for
that. Trying the netboot floppy on the CD in expert mode (which allows
selecting the media), IIRC, let me do a network install but *does not
offer RAID creation*. The graphical cdrom install lets me have RAID
(very easy) but not a network install =).

I don't know if this makes a difference and it's definitely not
amanda-specific, but I thought I'd mail it. If only it let me define
pv's and LVM at the same time =)

--
"It seems to be spreading faster than Anna Kournikova"
        -- Mikko Hyyppönen on VBSWG.X (fsecure.com)
Re: Linux and dump
Hi Eric,

You may want to take a look through the list archives at:

  http://groups.yahoo.com/group/amanda-users/

This subject has already been hashed and rehashed to death on just
about every mailing list that I subscribe to, including this one.

I'm planning to migrate to SGI XFS on Linux--SGI has released an
installer CD for Red Hat 7.1 which can make XFS filesystems. XFS is a
journaled filesystem, and it can be run over RAID, unlike ext3, which
had problems with RAID on the 2.2 kernel. You can download the
installer for free from ftp://linux-xfs.sgi.com but the server appears
to be down right now.

Eric Veldhuyzen wrote:
> I just saw that someone had problems with dump and Linux. This made
> me remember a posting from Linus Torvalds of a few weeks back which I
> think anyone still using dump with Linux should read:
>
>    http://www.lwn.net/2001/0503/a/lt-dump.php3
>
> Summary: "Dump was a stupid program in the first place. Leave it behind."

--
"Jonathan F. Dill" ([EMAIL PROTECTED])
amandad: error receiving message: Connection refused
Hello!

I have an amanda server which backs up 5-10 clients over a firewall
(NAT). Sometimes the backup works, but sometimes I get messages like
this in amdump:

  setting up estimates for asta:hda2
  asta:hda2 overdue 421 days for level 0
  setup_estimate: asta:hda2: command 0, options:
      last_level 1 next_level0 -421 level_days 1
      getting estimates 0 (149830) 1 (32990) 2 (20650)
  setting up estimates for asta:sda1
  asta:sda1 overdue 425 days for level 0
  setup_estimate: asta:sda1: command 0, options:
      last_level 1 next_level0 -425 level_days 5
      getting estimates 0 (1397450) 1 (111080) 2 (117650)

and then:

  error result for host asta disk sda1: Request to asta timed out.
  error result for host asta disk hda2: Request to asta timed out.

On the client, amandad.debug shows:

  got packet:

  Amanda 2.4 REQ HANDLE 000-A8970708 SEQ 990101241
  SECURITY USER amanda
  SERVICE selfcheck
  OPTIONS ;
  GNUTAR sda1 0 OPTIONS |;bsd-auth;compress-fast;index;exclude-file=/etc/exclude.gtar;
  GNUTAR hda2 0 OPTIONS |;bsd-auth;compress-fast;index;exclude-file=/etc/exclude.gtar;

  sending ack:

  Amanda 2.4 ACK HANDLE 000-A8970708 SEQ 990101241

  amandad: running service "/usr/local/amanda/libexec/selfcheck"
  amandad: error receiving message: Connection refused

Does somebody know what the problem is?

Thanks in advance,
Mirek
Linux and dump
Hi,

I just saw that someone had problems with dump and Linux. This made me
remember a posting from Linus Torvalds of a few weeks back which I
think anyone still using dump with Linux should read:

  http://www.lwn.net/2001/0503/a/lt-dump.php3

Summary: "Dump was a stupid program in the first place. Leave it behind."

--
#!perl # Life ain't fair, but root passwords help.
# Eric Veldhuyzen [EMAIL PROTECTED]
$!=$;=$_+(++$_);($:,$~,$/,$^,$*,$@)=$!=~ # Perl Monger
/.(.)...(.)(.)(.)..(.)..(.)/;`$^$~$/$: $^$*$@$~ $_>&$;`
Re: oh oh...
On Wed, May 16, 2001 at 09:54:20PM -0500, John R. Jackson wrote:
[...]
> Or you could switch to GNU tar (but check the mailing list for the issues
> that brings up).

I would advise that. I really, really don't trust dump.

> You might also make sure you're up to date on dump version. A lot of good
> things have happened to it in the last several months (so I've heard).

From the Linux Weekly News (2001/05/03):

> Trashing your filesystem with dump. It has been known for a very long
> time that using dump to back up live filesystems can result in corrupt
> backups. It turns out that, with Linux kernels through 2.4.4, dumping a
> live filesystem has the potential to corrupt the filesystem in place,
> even if the dump process has no write access.

Let me repeat again: I really, really don't trust dump.

--
#!perl # Life ain't fair, but root passwords help.
# Eric Veldhuyzen [EMAIL PROTECTED]
$!=$;=$_+(++$_);($:,$~,$/,$^,$*,$@)=$!=~ # Perl Monger
/.(.)...(.)(.)(.)..(.)..(.)/;`$^$~$/$: $^$*$@$~ $_>&$;`