Re: Full Backups and Holding Disk
>Will amanda use the holding disk for temp storage of full backups?

Yes.

If Amanda drops into degraded mode (e.g. a tape error), then whether it puts full dumps in the holding disk is based on the "reserve" value.  The default (100%, meaning "reserve the entire holding disk for incrementals") does not allow full dumps.  Setting it to a small value (e.g. zero) will.

>Reid

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
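For reference, the relevant amanda.conf line might look like the following; the percentage is only an illustrative choice, not a recommendation from the original reply:

    # amanda.conf (excerpt)
    # "reserve" = percent of the holding disk kept for incrementals when
    # no tape is available (degraded mode); the default of 100 keeps
    # full dumps out of the holding disk entirely.
    reserve 30    # lets degraded-mode fulls use up to 70% of the holding disk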
Full Backups and Holding Disk
Greetings,

The information I have been able to read on this has been ambiguous, so my question is: Will amanda use the holding disk for temp storage of full backups?

One of my hosts is painfully slow at feeding data, and I hate to have my DAT running on and off for 8 hours rather than just streaming the data to tape in less than 1/4 of that time.

TIA,
Reid
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Personal Web Page: http://dpsi4.org/~reidm
Re: NFS performance question
> Hello all.  I've been using amanda to do all our backups for more than a
> year now and love it.  I'm writing to hear what the list has to say about
> using amanda to back up a significant amount of data via an NFS mount.
>
> Background: I'm looking at deploying a SNAP! server or other such NAS
> device for internal network drive use.  I would have roughly 100GB of
> data mounted to one of my Linux boxen via NFS off the snap server.  I
> imagine that the incrementals would only be grabbing a few hundred
> megs, but obviously a level 0 backup of 100+GB is significant!

I've done something similar for several months (NetApp Filer) via NFS and GNU tar to split up the hundreds of GB into nice little chunks.  The performance was ok.  In the best cases, I got something around 7 MB/s.
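As an illustration of that splitting, the disklist entries might look something like this; the host name and mount-point subdirectories are placeholders, and comp-user-tar is just one of the GNU-tar dumptypes from the example amanda.conf (use whatever dumptype fits your setup):

    # disklist (excerpt): back up subdirectories of the NFS mount as
    # separate entries so no single dump image is hundreds of GB
    backuphost   /mnt/nas/projects   comp-user-tar
    backuphost   /mnt/nas/home       comp-user-tar
    backuphost   /mnt/nas/archive    comp-user-tar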
NFS performance question
Hello all.  I've been using amanda to do all our backups for more than a year now and love it.  I'm writing to hear what the list has to say about using amanda to back up a significant amount of data via an NFS mount.

Background: I'm looking at deploying a SNAP! server or other such NAS device for internal network drive use.  I would have roughly 100GB of data mounted to one of my Linux boxen via NFS off the snap server.  I imagine that the incrementals would only be grabbing a few hundred megs, but obviously a level 0 backup of 100+GB is significant!

The box running the amanda server is an Athlon 700/256MB RAM/HP DLT1 40/80GB tapes/U160 Adaptec SCSI adaptor.  I currently have the holding disk on the IDE chain, and realize that alone is a potential bottleneck.  RedHat 7.1 (2.4.2-2 -- no 2 gig filesize limits!!)

So the question is, how well does amanda, or more specifically dump/tar, handle a backup of this type?  The wire speed is 100Mb full duplex.  I'm really trying to get a feel for how much of a headache this will potentially be, and am hoping that some of you are doing similar things that might provide me with a metric for this project.  I haven't purchased the SNAP box yet, and won't if the response is that this is not feasible.

TIA to all responders.

--
Matthew Boeckman  (816) 777-2160
Manager - Systems Integration
Saepio Technologies

Go away, or I will replace you with a small shell script.
Re: transfer amanda from SunOS to linux
Hi,

I'd missed the part of 'running the client' in the document.  I started the client and got rid of the timeout message.  'mt' fixed the block size problem.  Also I realized that I left the labels in the config file as default (^VOL...) but the tapelist says (^FULL...).  Now amanda asks for the right tape.

Thank you very much.

Murat Okyar

"John R. Jackson" wrote:
>
> >[amanda@coltrane /]$ amcheck deciFull
> >Amanda Tape Server Host Check
> >-
> >Holding disk /usr/tmp: 194380 KB disk space available, that's plenty
> >st0: Incorrect block size.
> >ERROR: /dev/st0: reading label: Input/output error
> > (expecting a new tape)
>
> First, that's probably the wrong device name.  It should be /dev/nst0
> (i.e. the "non-rewinding" name).
>
> The "incorrect block size" probably means you wrote the tapes on the
> Sun box with hardware blocking enabled (whether you knew it or not)
> and the Linux box is either set up for no hardware blocking or for a
> different hardware block size.
>
> I think there is an "mt" option on Linux for setting the hardware
> blocksize, something like "mt -f /dev/nst0 blocksize 0" (but look it up
> in the docs).  Once you have it set to "variable" (zero), do this:
>
>   mt -f /dev/nst0 rewind
>   dd if=/dev/nst0 bs=32k count=1 | wc -c
>
> I'm guessing you'll get a block that is either 512 bytes or 1024 instead
> of 32768.
>
> The next question is what to do about it.  You could set the Linux
> blocksize value to match, and that would be OK.  Or you could re-amlabel
> each tape before use with blocksize zero and just remember to reset the
> blocksize on the "old" tapes until they get reset.
>
> >Why doesn't amanda look at the tapelist and instead
> >asks for a new tape?
>
> How many tapelist entries do you have?  What is tapecycle set to in
> your amanda.conf?  What does "amadmin <config> tape" say?
>
> >'selfcheck request timed out. Host down?' warning
> >didn't make sense to me.  'coltrane' is the machine I am
> >running amanda on and the tape is connected to it.  Also I
> >want to back up only the disks on this machine.
>
> Did you do all the client side setup (even though this is your server
> machine)?  In particular, the inetd/xinetd stuff?  Since I just looked
> them up for someone else and they're handy, here are the two FAQ items
> about this:
>
> http://amanda.sourceforge.net/fom-serve/cache/16.html
> http://amanda.sourceforge.net/fom-serve/cache/140.html
>
> >Murat Okyar
>
> John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
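For convenience, the block-size check quoted above can be strung together in a small script.  This is only a sketch: it assumes the non-rewinding device is /dev/nst0 and that your mt build spells the subcommand "setblk" (some spell it "blocksize" -- check mt(1) before running it):

    #!/bin/sh
    # Check what hardware block size an old tape was written with.
    TAPE=/dev/nst0
    mt -f $TAPE setblk 0                              # variable block size (subcommand name is an assumption)
    mt -f $TAPE rewind
    dd if=$TAPE bs=32k count=1 2>/dev/null | wc -c    # bytes in the first block on the tape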
Re: Problems with amrecover/index/amcheck after tape problems
>This is the content of our tapelist file. I think it should have more than
>one line?!?
>
>20010725 NEURO007 reuse

Ooops.  You've lost your tapelist file, which is a very bad thing.  That's why Amanda is asking about new tapes and complaining about backups being overwritten.  It's also why I make a backup of it (and a lot of other critical Amanda files) before each run and save several copies, just in case.

You may be able to put it back together, though.  I just tried the following ksh code.  It finds the taper START line in each log.MMDD.NN file and rebuilds the tapelist file from that.

  rm tapelist.log
  cat log.* | grep '^START taper' |
  while read x x x datestamp x label x
  do
    echo $datestamp $label reuse >> tapelist.log
  done
  sort -rn < tapelist.log > tapelist.new

At this point, you will hopefully have 25 tapes listed in tapelist.new.  Look through it and make sure things seem right.  In particular, make sure a tape is not listed twice.

Check your current tapelist file.  It should be owned by your Amanda user and mode 0600.  Move it out of the way and copy tapelist.new to tapelist, setting the ownership and mode.

Finally, try "amadmin <config> tape" again and see if it's happier.

>This is what amrecover tells me after start up and changing to the disk in
>question: ...

However, I'm worried that you've also lost your log.MMDD.NN files.  That would explain why amrecover is mis-behaving.  And if you've lost them, it's going to be harder to rebuild the tapelist file.  You might be able to do it if you still have the amdump.NN files (basically with the code from above, altered a bit to match the different file format).  But without the log.MMDD.NN files, amrecover is not going to work.

So, do you have the log.MMDD.NN files?  If not, are they on a backup tape that could be restored?  You might also look in the "oldlog" directory.  If Amanda got rid of them, they should be in there.

>... The content of 20010719_1.gz looks like the
>following lines and I think they are ok?!? ...

Yes, that looks fine (at least one thing is working right for you :-).

>We are using amanda since March. Sometimes amcheck had problems accessing the
>tape drive but it never had influence on the nightly backup. Until this week
>:-(

I think something bad happened to the directory that has your tapelist, and possibly the log.MMDD.NN files.  But it's unlikely Amanda did it.  More likely somebody goofed with a "find ... rm", "rdist" or something like that.

>Even if the index should be lost I should be able to restore old files using
>amrestore?!?

Yes.

>Christina Rossmanith

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
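For completeness, the "move it out of the way and set ownership and mode" step described above might look like this; the user name "amanda" is only a placeholder for whatever your Amanda user actually is:

    mv tapelist tapelist.broken      # keep the damaged file, just in case
    cp tapelist.new tapelist
    chown amanda tapelist            # your Amanda user
    chmod 600 tapelist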
Re: amtape show
>I removed the "commented out" lines but that didn't fix the problem. ...

That's not too surprising.

>So, if you can send a debugging version of the script, I'd appreciate it.

The following (untested) patch will gather more debugging information to a file named chg-zd-mtx.$$ in your Amanda temp directory (e.g. /tmp/amanda).  Note that this will *not* fix your problem -- this is purely information gathering.

Save a copy of your current chg-zd-mtx.sh.in.  Apply the patch.  Run ./config.status to create a new chg-zd-mtx.sh from the .in file.  Cd to changer-src and run "make".  Then either run "make install" or copy chg-zd-mtx to the installation location (make sure it ends up mode 755).

Finally, try your test(s) again and look through the chg-zd-mtx.$$ debug file to find out what program is getting the bus error, or send the file to me and I'll look at it.

It's possible this patch will change the way your test behaves since it redirects stderr to a file.  It's also possible the patch won't work at all.  It's untested (I don't use mtx here).  If it goofs, give me the details and I'll get you a new version.

>Pam Miller

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]

[attachment: chg-zd-mtx.debug.diff]
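For reference, the rebuild steps described above might look roughly like this from the top of the Amanda source tree; the -p level for patch and the exact paths are assumptions, not from the original message:

    cp changer-src/chg-zd-mtx.sh.in changer-src/chg-zd-mtx.sh.in.orig   # keep a backup
    patch -p0 < chg-zd-mtx.debug.diff     # apply the debugging patch (-p level is a guess)
    ./config.status                       # regenerate chg-zd-mtx.sh from the .in file
    (cd changer-src && make)              # rebuild the changer script
    (cd changer-src && make install)      # or copy chg-zd-mtx by hand, mode 755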
Re: compiling on aix 4.3.3
>The problem is probably a missing mnttab.h. configure tells me:
>
>checking for mntent.h... yes
>checking for mnttab.h... no

No, the problem is that AIX is now releasing a broken mntent.h (that file didn't use to exist).  And they don't document the interfaces to the new routines that use it.  Grr.

None of our systems have been upgraded to this mess yet.  When one of them is, I'll try to come up with a real fix.

One workaround is to move the AIX provided mntent.h out of the way.

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
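A minimal sketch of that workaround, assuming the header lives in /usr/include (check your system first, and put the file back once a real fix is available):

    # move the broken AIX mntent.h aside so Amanda's configure falls back
    # to the fstab.h code path, then rebuild from the source tree
    mv /usr/include/mntent.h /usr/include/mntent.h.disabled
    ./configure [your usual options]
    make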
Re: moving the server between machines
>Assuming I keep the same configure parameters while compiling amanda on
>the new server as the ones found on the old server, are there any special
>things I need to do in order to move the amanda backup server from one
>machine to another?  Will it be enough to move the amanda files (conf and
>database)?

You also need to move the tapelist file.  If you use amrecover, you'll also need the log.MMDD.NN files and the index directory tree.

> Paolo

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
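A rough sketch of copying that state over; the directory names are placeholders (use whatever your amanda.conf's infofile, indexdir and logdir actually point at), and NEWHOST stands for the new server:

    # run on the old server as the Amanda user
    CONFDIR=/usr/local/etc/amanda/daily      # amanda.conf, disklist, tapelist
    STATEDIR=/var/lib/amanda/daily           # curinfo database, index tree, log.YYYYMMDD.N files
    tar cf - $CONFDIR $STATEDIR | ssh NEWHOST 'cd / && tar xpf -'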
Re: example files
>Does anyone know where I can get a copy of the amanda example conf
>files?

Go to www.amanda.org and download the source tar file.  Or look in the FAQ (also at www.amanda.org) for how to get the files from CVS.

>Dan

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
Re: How to do Incrementals
It's been said repeatedly on this list that one of the hardest things to learn about amanda is to trust it.  We've been using it in our shop for years and it has *never* let us down.  It will tell you, through the reports it generates, if you are asking too much of it, so you can adjust the configuration, add tapes, whatever.
RE: How to do Incrementals
Dan,

Why?  Balancing incrementals and fulls across the entire dumpcycle is a feature.  It reduces your risk (what if the full backup tape goes bad and you need to restore?) and allows you to get more backups on the same number of tapes.  It allows you to just put in the next tape, instead of worrying about which set of tapes to pull from.  It allows you to add more tapes to the set and have them used efficiently.

Now, having heard all that, if you still really want to do it, there is a way.  You will need two separate backup configurations sharing a common database.  Configure one to always do full backups, and the other to do incrementals only.  (Search the list archives for "incr-only" if you need more info.)  A rough sketch of what those dumptypes might look like follows the quoted message below.

Good Luck,
Paul

> -----Original Message-----
> From: Dan Smith [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, July 25, 2001 8:04 AM
> To: [EMAIL PROTECTED]
> Subject: How to do Incrementals
>
> OK, right now, I do a full backup every night.  That's obviously not going
> to work forever ;).  How can I run a full backup once a week and
> incrementals every day?
>
> Right now, my dumpcycle is set at 0 for everything.  That makes it do a full
> backup every night, right?  So, what do I need to set the dumpcycle to in
> order for it to run incremental?  If dumpcycle isn't where I do it, then can
> someone help me out?
>
> Also, I thought I read somewhere that amanda would try to balance
> incrementals throughout the cycle or something like that.  I don't want it
> to.  I just want a full backup on the weekend and an incremental every night.
>
> Thanks!
>
> --Dan
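Here is that sketch of the two dumptypes such configs might use.  The "strategy" keyword is a real amanda.conf dumptype parameter; the dumptype names and the base dumptype comp-user-tar are only placeholders, not from the original message:

    # amanda.conf of the full-only config
    define dumptype always-full {
        comp-user-tar        # whatever base dumptype you normally use
        strategy noinc       # full dumps only, never incrementals
    }

    # amanda.conf of the incremental-only config
    define dumptype incr-only {
        comp-user-tar
        strategy nofull      # incrementals only, never fulls
    }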
Re: How to do Incrementals
On Wed, 25 Jul 2001 at 8:03am, Dan Smith wrote:

> OK, right now, I do a full backup every night.  That's obviously not going
> to work forever ;).  How can I run a full backup once a week and
> incrementals every day?
>
> Right now, my dumpcycle is set at 0 for everything.  That makes it do a full
> backup every night, right?  So, what do I need to set the dumpcycle to in
> order for it to run incremental?  If dumpcycle isn't where I do it, then can
> someone help me out?

dumpcycle 1 week
runspercycle 5

Which assumes backups only on weeknights.  Make runspercycle 6 if you want one job over the weekend as well.

> Also, I thought I read somewhere that amanda would try to balance
> incrementals throughout the cycle or something like that.  I don't want it
> to.  I just want a full backup on the weekend and an incremental every night.

As has been discussed *many* times on this list, amanda works best when you let it decide when to do the backups (which, granted, is tough for all of us who are used to diligently planning the backup schedule).  You *can* force amanda to do fulls only on the weekend (via amadmin force and/or separate configs), but it's really more trouble than it's worth.

--
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University
Re: amtape show
Hi,

On Tue, 24 Jul 2001, John R. Jackson wrote:

> >... Here's my changer.conf ...
>
> Unfortunately, the changer script does not understand comments in the
> config file.  So those lines you have "commented out" are actually
> being processed.  You need to remove them.
>
> I don't know that that is having anything to do with your problem,
> but it's certainly not helping :-).
>
> If it doesn't fix it, let me know and I'll send you a debugging version
> of the script that logs every step it takes so we can figure out where
> the bus error (signal 10) is coming from.

I removed the "commented out" lines but that didn't fix the problem.  So, if you can send a debugging version of the script, I'd appreciate it.

Thanks,
Pam Miller
How to do Incrementals
OK, right now, I do a full backup every night.  That's obviously not going to work forever ;).  How can I run a full backup once a week and incrementals every day?

Right now, my dumpcycle is set at 0 for everything.  That makes it do a full backup every night, right?  So, what do I need to set the dumpcycle to in order for it to run incremental?  If dumpcycle isn't where I do it, then can someone help me out?

Also, I thought I read somewhere that amanda would try to balance incrementals throughout the cycle or something like that.  I don't want it to.  I just want a full backup on the weekend and an incremental every night.

Thanks!

--Dan
Re: moving the server between machines
> Assuming I keep the same configure parameters while compiling amanda on
> the new server as the ones found on the old server, are there any special
> things I need to do in order to move the amanda backup server from one
> machine to another?  Will it be enough to move the amanda files (conf and
> database)?

You'll have to add this other server to .amandahosts on the clients.
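For illustration, the added .amandahosts line on each client might look like this; the host name is a placeholder, and the second column is the user your Amanda server runs the dumps as:

    # ~amanda/.amandahosts on each client
    newserver.example.com   amanda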
Re: compiling on aix 4.3.3
On AIX 4.3.3 without compatibility libs for older AIX versions:

> /usr/bin/sh ../libtool --mode=compile gcc -DHAVE_CONFIG_H -I. -I. -I../config -I../common-src -g -O2 -c getfsent.c
> gcc -DHAVE_CONFIG_H -I. -I. -I../config -I../common-src -g -O2 -c getfsent.c -o getfsent.o
> getfsent.c: In function `open_fstab':
> getfsent.c:154: `MNTTAB' undeclared (first use in this function)
> getfsent.c:154: (Each undeclared identifier is reported only once
> getfsent.c:154: for each function it appears in.)
> make[1]: *** [getfsent.lo] Error 1
> make[1]: Leaving directory `/tmp/amanda-2.4.2p2/client-src'
> make: *** [all-recursive] Error 1

The problem is probably a missing mnttab.h.  configure tells me:

checking for mntent.h... yes
checking for mnttab.h... no

config/config.h:

/* Define if you have the getmntent function. */
#define HAVE_GETMNTENT 1

/* Define if you have the <fstab.h> header file. */
#define HAVE_FSTAB_H 1

client-src/getfsent.c:

 48 #if defined(HAVE_FSTAB_H) && !defined(HAVE_MNTENT_H) /* { */
 49 /*
 50 ** BSD (GETFSENT_BSD)
 51 */
 52 #define GETFSENT_TYPE "BSD (Ultrix, AIX)"
 53
 54 #include <fstab.h>
 55
 56 int open_fstab()
 57 {
 58     return setfsent();
 59 }
 60
 61 void close_fstab()
 62 {
 63     endfsent();
 64 }
 65
 66

This part is not entered because of the !defined.  Instead some Linux/HP-UX/Irix part wants to be compiled:

137 # if defined(HAVE_MNTENT_H) /* } { */
138
139 /*
140 ** System V.3 (GETFSENT_SVR3, GETFSENT_LINUX)
141 */
142 #define GETFSENT_TYPE "SVR3 (NeXTstep, Irix, Linux, HP-UX)"
143
144 #include <mntent.h>
145
146 static FILE *fstabf1 = NULL;    /* /proc/mounts */
147 static FILE *fstabf2 = NULL;    /* MNTTAB */
148
149 int open_fstab()
150 {
151     close_fstab();
152 #if defined(HAVE_SETMNTENT)
153     fstabf1 = setmntent("/proc/mounts", "r");
154     fstabf2 = setmntent(MNTTAB, "r");
Re: compiling on aix 4.3.3
On Fri, Jul 06, 2001 at 01:45:26PM -0500, John R. Jackson wrote:
> >I've been trying all day to compile the latest stable release of amanda on aix
> > 4.3.3. ...
>
> I do this pretty often with no problem.

Today, I'm running into the same problem using AIX 4.3.3:

/usr/bin/sh ../libtool --mode=compile gcc -DHAVE_CONFIG_H -I. -I. -I../config -I../common-src -g -O2 -c getfsent.c
gcc -DHAVE_CONFIG_H -I. -I. -I../config -I../common-src -g -O2 -c getfsent.c -o getfsent.o
getfsent.c: In function `open_fstab':
getfsent.c:154: `MNTTAB' undeclared (first use in this function)
getfsent.c:154: (Each undeclared identifier is reported only once
getfsent.c:154: for each function it appears in.)
make[1]: *** [getfsent.lo] Error 1
make[1]: Leaving directory `/tmp/amanda-2.4.2p2/client-src'
make: *** [all-recursive] Error 1

client-src/getfsent.c:

139 /*
140 ** System V.3 (GETFSENT_SVR3, GETFSENT_LINUX)
141 */
142 #define GETFSENT_TYPE "SVR3 (NeXTstep, Irix, Linux, HP-UX)"
143
144 #include <mntent.h>
145
146 static FILE *fstabf1 = NULL;    /* /proc/mounts */
147 static FILE *fstabf2 = NULL;    /* MNTTAB */
148
149 int open_fstab()
150 {
151     close_fstab();
152 #if defined(HAVE_SETMNTENT)
153     fstabf1 = setmntent("/proc/mounts", "r");
154     fstabf2 = setmntent(MNTTAB, "r");
155 #else
156     fstabf2 = fopen(MNTTAB, "r");
157 #endif
158     return (fstabf1 != NULL || fstabf2 != NULL);
159 }

Well, this system isn't IRIX or Linux...
Re: Problems with amrecover/index/amcheck after tape problems
> Also, if you're using GNU tar, make sure the files are formatted properly.
> If you look at the first few lines and they start with a large number, it
> means you're using a broken version of GNU tar.  You'll need to upgrade
> to 1.13.19 (alpha.gnu.org), and those index files are pretty much junk
> (unless you want to strip the leading number off of each line).

I've already recovered files successfully earlier.  That was a few months ago, when we started with amanda.  Thus I think it should not be a tar problem, and tar wasn't exchanged since then.  The content of 20010719_1.gz looks like the following lines and I think they are ok?!?

/
/bin/
/lib/
/lib/Netlogon/
/lib/Netlogon/Win95/
/lib/Netlogon/WinNT/
/lib/Profiles/
/lib/Profiles/ahipke/
/lib/Profiles/ahipke/WinNT/

> Make sure all the directories and files are readable by the Amanda user
> (the one that runs amindexd from inetd/xinetd).

This is ok.

> Next, run "amadmin <config> find <host> <disk>" and make sure it finds
> backups from the date you're interested in back through a full dump.
>
> >The backups were flushed to the tape NEURO006 and I would have expected
> >that amanda requests tape NEURO007 next, but it tells me that it would
> >like a new tape. ...
>
> If Amanda asks for a new tape, it means the number of tapes in your
> tapelist file is less than your tapecycle value.
>
> >Am I right that I would have been informed if one tape
> >were not enough for amflush?!?
>
> One way or another.  If you have a tape changer configured, amflush
> would have automatically advanced to another tape (up to runtapes),
> just like amdump.
>
> If you don't and amflush ran into end of tape, it would have reported
> an error and told you it left some images in the holding disk.
>
> If your holding disk is now empty, then the amflush probably worked.

Yes, it is empty, so this should be fine...

> Exactly what happened should have been documented in the amflush E-mail.

It just told me that everything was fine and that it expects a new tape, which sounds strange to me :-(

> >Something else is strange: after amflush I tried amcheck still having
> >tape NEURO006 in the tape drive.  And amcheck was happy.  It was happy
> >with NEURO007 as well.
>
> That seems very wrong.
>
> Are you sure amflush did anything to NEURO006?  If you run:
>
>   amadmin <config> find <some-client>
>
> (where "some-client" is a client you know was flushed) does it show
> that tape?  Were any errors reported in the amflush E-mail?
>
> >Could the problems accessing the tape drive during the last days have
> >left some corrupt info files?!?
>
> Not likely.
>
> >I would really be glad to get some advice where to look and what to do!
>
> Take a very close look at your tapelist file and at your tapecycle value.

This is the content of our tapelist file.  I think it should have more than one line?!?

20010725 NEURO007 reuse

And we have set tapecycle to 25 tapes.

> Also, "amadmin <config> tape" is a handy way to see what tape(s) Amanda
> expects to use next.

It tells me: The next Amanda run should go onto a new tape.

We have been using amanda since March.  Sometimes amcheck had problems accessing the tape drive, but it never had influence on the nightly backup.  Until this week :-(

I hope that I have answered all questions in enough detail...

Even if the index should be lost, I should be able to restore old files using amrestore?!?

Christina Rossmanith
moving the server between machines
Hi,

Assuming I keep the same configure parameters while compiling amanda on the new server as the ones found on the old server, are there any special things I need to do in order to move the amanda backup server from one machine to another?  Will it be enough to move the amanda files (conf and database)?

Paolo
Re: xfsdump estimates sometimes fail
> >on a particular host (Linux 2.4.6-prexy-xfs, RH 7.1 + LVM + XFS, Amanda
> >2.4.2p2) sometimes sendsize fails to estimate sizes with xfsdump.

I recognized kernel oopses in syslog when Amanda tried to back up this host (maxdumps=3):

Jul 24 00:45:28 somehost kernel: Unable to handle kernel NULL pointer dereference at virtual address 0008
Jul 24 00:45:28 somehost kernel: Oops:
Jul 24 00:45:28 somehost kernel: CPU: 0
Jul 24 00:45:28 somehost kernel: EIP: 0010:[xfs_itobp+303/464]
Jul 24 00:45:28 somehost kernel: EFLAGS: 00010246
Jul 24 00:45:28 somehost kernel: Process xfsdump (pid: 20983, stackpage=dce9f000)
Jul 24 00:45:28 somehost kernel: Call Trace: [xfs_bulkstat+2293/3104] [xfs_ioctl+1615/4768] [xfs_bulkstat_one+0/1344] [_xfs_imap_to_bmap+55/688] [xfs_bmbt_get_state+37/48] [avl_remove+201/224] [__alloc_pages+116/640]
Jul 24 00:45:28 somehost kernel:    [do_anonymous_page+54/144] [update_process_times+32/128] [update_wall_time+22/80] [timer_bh+36/592] [__delete_from_swap_cache+126/144] [handle_IRQ_event+58/112] [tasklet_hi_action+137/176] [__alloc_pages_limit+112/160]
Jul 24 00:45:29 somehost kernel:    [__alloc_pages+191/640] [do_anonymous_page+54/144] [do_no_page+48/192] [handle_mm_fault+97/208] [__up_wakeup+8/20] [stext_lock+3185/9300] [_end_pagebuf_page_io_multi+253/272] [dump_thread+37/288]
Jul 24 00:45:29 somehost kernel:    [linvfs_ioctl+45/64] [dump_thread+37/288] [dump_thread+37/288] [sys_ioctl+375/400] [dump_thread+37/288] [system_call+51/56] [dump_thread+37/288]
(register values, raw stack words and bare hex addresses omitted; they did not survive in the posted syslog excerpt)

The same occurred this morning when I drove sendsize by hand with maxdumps=3, but not with maxdumps=1.

Ok, I'll upgrade the kernel from 2.4.6-pre5-xfs to 2.4.7-xfs before any further investigation...
Re: xfsdump estimates sometimes fail
> Sendsize may be run by hand.  Look for an amandad*debug file with
> SERVICE sendsize.  Take all the lines in the packet from the OPTIONS
> through the last DUMP line and put them in a temp file someplace.
> Then run sendsize by hand **as the Amanda user** with that file as
> standard input.

Great!

> You might try setting maxdumps to 1 in the OPTIONS line of the temp file
> to see if that makes any difference.  You might also try only listing
> the file system that shows the error.

It does.  Here's sendsize.in:

OPTIONS maxdumps=3;hostname=some.host.domain.org;
DUMP /var/spool/squid 0 1970:1:1:0:0:0 3
DUMP /var/spool/squid 1 2001:7:21:22:54:8 3
DUMP /var/spool/news 0 1970:1:1:0:0:0 2
DUMP /var/spool/news 2 2001:7:19:23:0:46 2
DUMP /var/spool/news 3 2001:7:22:7:31:34 2
DUMP /var/spool 0 1970:1:1:0:0:0 4
DUMP /var/spool 2 2001:7:21:22:59:19 4
DUMP /var/spool 3 2001:7:23:22:59:37 4
DUMP /var 0 1970:1:1:0:0:0 1
DUMP /var 1 2001:7:22:22:47:15 1
DUMP /usr 0 1970:1:1:0:0:0 1
DUMP /usr 1 2001:7:23:23:4:7 1
DUMP / 0 1970:1:1:0:0:0 1
DUMP / 1 2001:7:22:23:0:46 1

And the output:

OPTIONS maxdumps=3;
/var 0 SIZE 448301
/var 1 SIZE 138171
/var/spool 0 SIZE 4299533
/ 0 SIZE 133601
/var/spool 2 SIZE 1393488
/var/spool 3 SIZE 618897
/usr 0 SIZE 1048276
/ 1 SIZE 434
/usr 1 SIZE 11155
/var/spool/squid 0 SIZE -1
/var/spool/news 0 SIZE -1
/var/spool/news 2 SIZE -1
/var/spool/squid 1 SIZE -1
/var/spool/news 3 SIZE -1

When I change maxdumps to 1, the sizes are printed correctly:

OPTIONS maxdumps=1;
/ 0 SIZE 133601
/ 1 SIZE 434
/usr 0 SIZE 1048276
/usr 1 SIZE 11155
/var 0 SIZE 448456
/var 1 SIZE 138178
/var/spool 0 SIZE 4299727
/var/spool 2 SIZE 1393681
/var/spool 3 SIZE 619091
/var/spool/news 0 SIZE 7356950
/var/spool/news 2 SIZE 593094
/var/spool/news 3 SIZE 427060
/var/spool/squid 0 SIZE 17654653
/var/spool/squid 1 SIZE 541157
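For anyone repeating this, the by-hand invocation might look roughly like the following; the sendsize path and the user name "amanda" are taken from debug output elsewhere in this thread and may differ on your install:

    # run as root on the client; sendsize reads the request packet from stdin
    su amanda -c '/usr/lib/amanda/sendsize < sendsize.in > sendsize.out 2> sendsize.err'
    cat sendsize.out      # the SIZE results, as shown above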
Re: xfsdump estimates sometimes fail
> >on a particular host (Linux 2.4.6-prexy-xfs, RH 7.1 + LVM + XFS, Amanda
> >2.4.2p2) sometimes sendsize fails to estimate sizes with xfsdump.
>
> Give the following patch a try.  It won't fix anything, but will sort
> out the debug file lines so you can tell which process is doing what.

With this patch applied, I didn't get backups of that host this night...

FAILURE AND STRANGE DUMP SUMMARY:
  some.host. /var/spool/squid lev 0 FAILED [missing result for /var/spool/squid in some.host.domain.org response]
  some.host. /var/spool/news lev 0 FAILED [missing result for /var/spool/news in some.host.domain.org response]
  some.host. /var/spool lev 0 FAILED [missing result for /var/spool in some.host.domain.org response]
  some.host. /var lev 0 FAILED [missing result for /var in some.host.domain.org response]
  some.host. /usr lev 0 FAILED [missing result for /usr in some.host.domain.org response]
  some.host. / lev 0 FAILED [missing result for / in some.host.domain.org response]

/tmp/amanda holds amandad.xy.debug and sendsize.xy.debug.  Here are the contents of sendsize:

sendsize: debug 1 pid 7601 ruid 33 euid 33 start time Wed Jul 25 00:45:01 2001
/usr/lib/amanda/sendsize: version 2.4.2p2
sendsize: calculating for amname '/', dirname '/'
sendsize: calculating for amname '/usr', dirname '/usr'
sendsize: calculating for amname '/var', dirname '/var'
sendsize: calculating for amname '/var/spool', dirname '/var/spool'
sendsize: calculating for amname '/var/spool/news', dirname '/var/spool/news'
sendsize: calculating for amname '/var/spool/squid', dirname '/var/spool/squid'
sendsize: pid 7601 finish time Wed Jul 25 00:45:02 2001
example files
Hello,

Does anyone know where I can get a copy of the amanda example conf files?  The install I have doesn't seem to have them under /usr/local; the etc directory doesn't exist.

Thanks,
Dan