Re: planner timeout
Hi Paul, thanks for getting back to me on this. I don't think it is a firewall issue, because the backup has run fine until now; in fact it ran fine last night. Nobody was running any big calculations that could have affected it, so maybe something else strange was going on on the network. Is there a way to change the 30-second timeout the planner uses while waiting for the ACK?

Cheers, René

On Nov 29, 2007, at 10:24 AM, Paul Bijnens wrote:

> On 2007-11-27 18:13, René Kanters wrote:
>> Hi, I have been running into problems that some of my systems are
>> heavily used for long computations, making them somewhat less
>> responsive. Last night I ran into the issue that four systems did
>> not send acknowledgments back to the dumper in time during the
>> planning process:
>>
>>   planner: ERROR Request to werner.richmond.edu failed: timeout waiting for ACK
>>
>> I looked into allowing more time for that stage, which I believe
>> etimeout should allow, but my amanda.conf has 'etimeout 600' in it
>> while the planner's debug file ends with:
>>
>>   security_seterror(handle=0x3038a0, driver=0xa2a0c (BSD) error=timeout waiting for ACK)
>>   security_close(handle=0x3038a0, driver=0xa2a0c (BSD))
>>   planner: time 29.898: pid 3734 finish time Tue Nov 27 00:45:36 2007
>>
>> suggesting that it still only waits for 30 seconds.
>
> planner sends a packet to the client(s) and expects at least a UDP
> ACK packet back within 30 seconds, indicating that the client did
> receive the request. The etimeout is the time that planner will wait
> for the packet with the different size estimates from the client,
> which will usually take more than 30 seconds.
>
>> Am I setting the wrong timeout?
>
> So it seems you can't even get a reply back. Firewall issues?
--
Paul Bijnens, xplanation Technology Services        Tel  +32 16 397.511
Technologielaan 21 bus 2, B-3001 Leuven, BELGIUM    Fax  +32 16 397.512
http://www.xplanation.com/          email: [EMAIL PROTECTED]
***********************************************************************
* I think I've got the hang of it now:  exit, ^D, ^C, ^\, ^Z, ^Q, ^^, *
* F6, quit, ZZ, :q, :q!, M-Z, ^X^C, logoff, logout, close, bye, /bye, *
* stop, end, F3, ~., ^]c, +++ ATH, disconnect, halt, abort, hangup,   *
* PF4, F20, ^X^X, :D::D, KJOB, F14-f-e, F8-e, kill -1 $$, shutdown,   *
* init 0, kill -9 1, Alt-F4, Ctrl-Alt-Del, AltGr-NumLock, Stop-A, ... *
* ...  Are you sure?  ...   YES   ...   Phew ...  I'm out             *
***********************************************************************
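For reference, a hedged sketch of the timeout knobs that amanda.conf does expose (parameter names as documented for Amanda 2.5.x; note that none of them appears to govern the planner's ~30-second UDP ACK wait, which Paul describes as fixed):

```
etimeout 600    # seconds per disk that planner waits for size estimates
dtimeout 1800   # seconds of inactivity tolerated during a running dump
ctimeout 30     # seconds amcheck waits for each client to respond
```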
planner timeout
Hi,

I have been running into problems that some of my systems are heavily used for long computations, making them somewhat less responsive. Last night I ran into the issue that four systems did not send acknowledgments back to the dumper in time during the planning process:

  planner: ERROR Request to werner.richmond.edu failed: timeout waiting for ACK

I looked into allowing more time for that stage, which I believe etimeout should allow, but my amanda.conf has 'etimeout 600' in it while the planner's debug file ends with:

  security_seterror(handle=0x3038a0, driver=0xa2a0c (BSD) error=timeout waiting for ACK)
  security_close(handle=0x3038a0, driver=0xa2a0c (BSD))
  planner: time 29.898: pid 3734 finish time Tue Nov 27 00:45:36 2007

suggesting that it still only waits for 30 seconds. Am I setting the wrong timeout?

I am running Amanda version 2.5.1p1 on RH WS4 systems, with the backup server a Mac OS X box. Any help is appreciated.

Cheers, René
Re: GNUTAR hanging
Hi Anthony,

I have been having issues with gnutar hanging also. In my case I think it has to do with hardware issues with the external Western Digital 1 TB drive to which I do the backups. I usually end up having to kill the process on the client, reboot the server, and make sure that the drive is power-cycled too. Then I need to relabel the slot that got messed up (so it will be reused when I manually start a new amdump).

After the first time this happened I had a hard time getting it all to work again, and I decided to switch from the FireWire 800 bus to the FireWire 400 interface. That did not seem to affect the overall performance, since I back up directly to disk without a holding disk; that results in only one dumper/taper writing to the disk, so the speed of my backup is mainly determined by the time the compression takes. Since then this has happened three or four times, with about a week or so in between. Annoying, since I physically have to be at the server to reboot it (remote rebooting doesn't even work at that stage) and pull the power on the external disk.

I am a bit surprised that Amanda is not able to realize that it cannot write to the disk anymore and doesn't call the whole thing off, cleaning up the processes on the clients too.

PS: I am running 2.5.1p1 on a Mac server with Red Hat WS4 clients.

René

On May 3, 2007, at 10:34 PM, anthonyh wrote:

Hi guys, any solution to the GNUTAR hanging issue yet? I'm having the same problem: the tar process runs indefinitely until I kill it.
top - 10:31:19 up 5 days, 12:01, 1 user, load average: 1.11, 1.03, 1.01
Tasks:  44 total,   2 running,  41 sleeping,   0 stopped,   1 zombie
Cpu(s): 30.6% us, 69.4% sy, 0.0% ni, 0.0% id, 0.0% wa, 0.0% hi, 0.0% si
Mem:    125064k total,   122244k used,     2820k free,    77304k buffers
Swap:   262072k total,        0k used,   262072k free,    14384k cached

  PID USER  PR NI VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
17812 root  25  0 1920  652  568 R 99.7  0.5  4732:18 tar
 4546 root  16  0 2888  868  724 R  0.3  0.7  0:00.03 top
    1 root  16  0 1952  460  392 S  0.0  0.4  0:01.09 init

# ps aux | grep tar
root 17812 99.5 0.5 1920 652 ? R May01 4732:26 gtar --create --file - --directory /boot --one-file-system --listed-incremental /usr/local/var/amanda/gnutar-lists/anthonyho.no-ip.org_boot_1.new --sparse --ignore-failed-read --totals .
root  4548  0.0 0.4 4820 572 pts/1 D 10:31 0:00 grep tar

Any workaround? Regards, Anthony

Chris Cameron wrote:

sendbackup: sendbackup: debug 1 pid 28355 ruid 2 euid 2: start at Thu Mar 29 07:15:09 2007
sendbackup: version 2.5.1p3
Could not open conf file /etc/amanda/amanda-client.conf: No such file or directory
Could not open conf file /etc/amanda/test/amanda-client.conf: No such file or directory
sendbackup: debug 1 pid 28355 ruid 2 euid 2: rename at Thu Mar 29 07:15:09 2007
sendbackup req: GNUTAR /dev/raid0g 0 1970:1:1:0:0:0 OPTIONS |;auth=BSD;compress-fast;encrypt-serv-cust=/opt/amanda/sbin/amcrypt-ossl;server-decrypt-option=-d;index;
parsed request as: program `GNUTAR'
                   disk `/dev/raid0g'
                   device `/dev/raid0g'
                   level 0
                   since 1970:1:1:0:0:0
                   options `|;auth=BSD;compress-fast;encrypt-serv-cust=/opt/amanda/sbin/amcrypt-ossl;server-decrypt-option=-d;index;'
sendbackup: start: carp0:/dev/raid0g lev 0
sendbackup: time 0.041: spawning /usr/bin/gzip in pipeline
sendbackup: argument list: /usr/bin/gzip --fast
sendbackup-gnutar: time 0.046: pid 31136: /usr/bin/gzip --fast
sendbackup-gnutar: time 0.064: doing level 0 dump as listed-incremental to '/var/amanda/gnutar-lists/carp0_dev_raid0g_0.new'
sendbackup-gnutar: time 0.069: doing level 0 dump from date: 1970-01-01 0:00:00 GMT
sendbackup: time 0.072: spawning /usr/local/libexec/runtar in pipeline
sendbackup: argument list: runtar test gtar --create --file - --directory /home --one-file-system --listed-incremental /var/amanda/gnutar-lists/carp0_dev_raid0g_0.new --sparse --ignore-failed-read --totals .
sendbackup-gnutar: time 0.129: /usr/local/libexec/runtar: pid 14139
sendbackup: time 0.130: started backup
sendbackup: time 0.139: started index creator: /usr/local/bin/gtar -tf - 2>/dev/null | sed -e 's/^\.//'

sendbackup: -auth=bsd: debug 1 pid 9120 ruid 2 euid 2: start at Thu Mar 29 08:21:53 2007
security_getdriver(name=BSD) returns 0x42af4038
-auth=bsd: version 2.5.1p3
-auth=bsd: build: VERSION=Amanda-2.5.1p3
-auth=bsd: BUILT_DATE=Mon Mar 26 08:10:30 MDT 2007
-auth=bsd: BUILT_MACH=OpenBSD carp1.netthruput.com 4.0 CARP#0 sparc64
-auth=bsd: CC=gcc
-auth=bsd: CONFIGURE_COMMAND='./configure' '--prefix=/usr/local' '--sysconfdir=/etc' '--localstatedir=/var' '--without-server' '--with-user=operator' '--with-group=operator'
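The TIME+ column in the top/ps output above is cumulative CPU time in minutes:seconds. A quick shell sketch of what the 4732:26 from the tar process amounts to (the value is copied from the listing above):

```shell
# Convert the ps TIME field "MMMM:SS" (cumulative CPU minutes:seconds)
# into whole hours; 4732:26 is the hung gtar process from the listing.
cputime="4732:26"
mins=${cputime%%:*}
secs=${cputime##*:}
echo "$(( (mins * 60 + secs) / 3600 )) hours of CPU"
```

Nearly 79 hours of CPU at ~99% utilization over a few days of wall-clock time suggests the process is spinning rather than blocked on I/O.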
Re: Does amrecover automatically use unflushed files on the holding disk?
Does this also suggest that one could set up a 1 TB hard drive as the holding disk and only write to that, i.e., never flush the data out to 'tape'? If that is possible, and the holding disk can be used as a 'fifo' type storage system, might that not be the ideal HD-based backup system, as opposed to using chg-disk? I assume the overwriting of older backup data would be the issue, which is why I thought of a fifo type storage: this way one could keep as many tapecycles as the drive could hold and just lose one when more data is backed up.

I currently do not use a holding disk and write directly to the external HD. The result is that each disklist is only doing a single backup at a time (sequentially). As long as the network is not the bottleneck, this is not an ideal way. Using multiple disklists and amdump instances could do the trick, but then you get more and more report emails and slots to maintain (when things go wrong). Am I being stupid doing it this way? Does somebody have a better suggestion as to how to back up 11 different boxes to one server?

René

On Mar 30, 2007, at 5:23 PM, Frank Smith wrote:

> [EMAIL PROTECTED] wrote:
>> On Fri, Mar 30, 2007 at 03:19:35PM -0400, Guy Dallaire wrote:
>>> A quick question. Suppose you run some level 0 backups (GTAR
>>> method) with Amanda and leave the files on the holding disk until
>>> there are enough files to fill a tape, then run amflush. Say this
>>> takes 5-6 working days. Now, if, after 3 days, I have to restore
>>> something that was dumped to the disk on the first day, and run
>>> amrecover on the client and setdate -MM-DD (today - 3 days):
>>> are the holding disk files indexed? Will Amanda use them and not
>>> ask for a tape when the time comes to extract?
>>
>> Quick answer: yep!
>
> And as an added bonus, it's faster than restoring from tape.
>
> Frank
>
> --
> Frank Smith                 [EMAIL PROTECTED]
> Sr. Systems Administrator   Voice: 512-374-4673
> Hoover's Online               Fax: 512-374-4501
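A hedged amanda.conf sketch of the holding-disk-as-buffer idea discussed above (the directory path and sizes are invented for illustration; autoflush is a 2.5.x option that makes a later run pick up dumps still sitting on the holding disk):

```
holdingdisk hd1 {
    directory "/backup/holding"   # hypothetical path on the 1 TB drive
    use 900 Gb                    # leave some headroom on the drive
}
autoflush yes    # flush any leftover holding-disk dumps on the next run
```

Dumps kept on the holding disk remain indexed and restorable, per Frank's answer, but they are not rotated like vtapes, so some manual amflush/cleanup discipline would still be needed.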
forcing a failed backup to disk to use the same slot
Hi,

I have had some problems where an overnight backup 'hung up', i.e., a level 0 dump started but never properly finished, so backups from other machines did not work either. So far the problem has been with the Western Digital disk on my Mac Amanda server, where a restart of the external disk solves the problem.

I catch these problems the same day they happen, so I am wondering whether it is possible to run amdump with a configuration that dumps to disk (using tpchanger chg-disk) and have Amanda use not the next slot, but the slot that the failed dump started on. I could not find any information on that in the man pages. Any ideas?

Thanks, René
Re: forcing a failed backup to disk to use the same slot
In this case it did start writing one dump, but the slot never got the 'tape end' written on it, and I can't even be sure that the one and only dump file on it was properly closed, since I had to reboot the server...

So I didn't miss anything in the documentation: there is no amadmin or amdump argument that allows me to 'reuse' the tape/slot. Would this be a possible feature that I could request for consideration?

René

On Mar 24, 2007, at 4:17 PM, Frank Smith wrote:

> René Kanters wrote:
>> Hi, I have had some problems where an overnight backup 'hung up',
>> i.e., a level 0 dump started but never properly finished, so backups
>> from other machines did not work either. So far the problem has been
>> with the Western Digital disk on my Mac Amanda server, where a
>> restart of the external disk solves the problem. I catch these
>> problems the same day they happen, so I am wondering whether it is
>> possible to run amdump with a configuration that dumps to disk
>> (using tpchanger chg-disk) and have Amanda use not the next slot,
>> but the slot that the failed dump started on. I could not find any
>> information on that in the man pages. Any ideas?
>>
>> Thanks, René
>
> If you're sure that nothing you need was written to the 'tape', you
> can use amrmtape and then relabel it with the same label. That should
> make it the first tape to use on the next run. I've often wondered
> why Amanda marks a tape as 'used' on a failed backup even when
> nothing was written to tape.
>
> Frank
>
> --
> Frank Smith                 [EMAIL PROTECTED]
> Sr. Systems Administrator   Voice: 512-374-4673
> Hoover's Online               Fax: 512-374-4501
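Frank's suggestion can be sketched as two commands. The config name "daily" and label "DAILY-3" are placeholders, not taken from the thread, and the guard simply reports when the Amanda tools are not on the PATH:

```shell
# Forget the half-written vtape, then relabel it with the same label so
# it becomes the first tape used on the next run (per Frank's advice).
if ! command -v amrmtape >/dev/null 2>&1; then
    echo "amanda tools not installed"
    exit 0
fi
amrmtape daily DAILY-3     # remove the tape from the tapelist/database
amlabel -f daily DAILY-3   # force-relabel it with the same label
```

amrmtape also forgets what was dumped to that tape, so this is only safe when nothing on the failed vtape is needed, exactly as Frank cautions.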
Re: amanda client on MAC, launchd issue
Hi,

Has anybody been able to use the Mac as a backup server and do a restore from that? With the instructions I found online (which included the amandad launchd setup described below) I am able to use the Mac (OS X, not even the server edition) as a backup server, but I can't get amrestore to work.

René

On Jan 10, 2007, at 11:52 AM, Alan Pearson wrote:

> Maybe this will help. It's the one I use here for an OS X server
> client. I think the key line is inetdCompatibility:
>
> <?xml version="1.0" encoding="UTF-8"?>
> <!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN"
>   "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
> <plist version="1.0">
> <dict>
>   <key>GroupName</key>
>   <string>backup</string>
>   <key>InitGroups</key>
>   <true/>
>   <key>Label</key>
>   <string>org.amanda.amandad</string>
>   <key>ProgramArguments</key>
>   <array>
>     <string>/usr/libexec/amandad</string>
>   </array>
>   <key>Sockets</key>
>   <dict>
>     <key>Listeners</key>
>     <dict>
>       <key>SockServiceName</key>
>       <string>amanda</string>
>       <key>SockType</key>
>       <string>dgram</string>
>     </dict>
>   </dict>
>   <key>UserName</key>
>   <string>amanda</string>
>   <key>inetdCompatibility</key>
>   <dict>
>     <key>Wait</key>
>     <true/>
>   </dict>
> </dict>
> </plist>
>
> --
> AlanP
>
> On Wed, January 10, 2007 4:13 pm, McGraw, Robert P. wrote:
>> Not sure if this is relevant to your case, but:
>> 1) I am new to Mac OS X, so take pity if I say something stupid.
>> 2) We run a Mac OS X server.
>> 3) Our Mac OS X server uses Apple's launchd to launch programs.
>> 4) I added a LaunchDaemons .plist, restarted launchd, and I have
>>    been backing up our Mac server from a Solaris 10 host for the
>>    last several months; it has been working just fine.
>> I believe launchd is Apple's replacement for xinetd, cron and
>> rc.local files, something like Sun's new service management (svc).
Robert

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Gene Heskett
Sent: Tuesday, January 09, 2007 3:36 PM
To: amanda-users@amanda.org
Cc: Brian Cuttler; Chris Knight; Brian Kilpatrick
Subject: Re: amanda client on MAC, launchd issue

On Tuesday 09 January 2007 11:44, Brian Cuttler wrote:

> We are running the server on Solaris and a client on Mac OS X, but
> are seeing that the launchd process seems to invalidate the service.
> So far I haven't found the actual line in /var/log/system.log, but I
> see these messages which show the intent, so I believe I'm on track.
>
>   Jan 6 18:32:09 trel launchd: org.amanda.amandad: 7 more failures
>   without living at least 60 seconds will cause job removal
>
>   trel:/var/log cssadmin$ uname -a
>   Darwin trel.wadsworth.org 8.8.0 Darwin Kernel Version 8.8.0: Fri
>   Sep 8 17:18:57 PDT 2006; root:xnu-792.12.6.obj~1/RELEASE_PPC
>   Power Macintosh powerpc
>
> On the client: amandad: build: VERSION=Amanda-2.4.5p1
>
> Should we use xinetd instead?

Amanda needs xinetd because the services (generally speaking) are launched on demand rather than launched and sleeping till demand. It could at one time be done by inetd, but that code may have bit-rotted if it still exists, even in a version as old as 2.4.5p1. I'm currently running Amanda version 2.5.1p2-20070105 here.

And this is a wee bit odd, because on FC6 I had to install xinetd after the installation was done in order for Amanda to continue to work with my existing configuration, which I simply copied over from the old drive. I have NDI why xinetd is not now part of the default install, so it's possible we may be seeing the first shot in needing to figure out a new way to do it. In fact my curiosity is piqued enough to go ask on the fedora list as to the future status of xinetd.
TIA, Brian

---
Brian R Cuttler              [EMAIL PROTECTED]
Computer Systems Support     (v) 518 486-1697
Wadsworth Center             (f) 518 473-6384
NYS Department of Health     Help Desk 518 473-0773

--
Cheers, Gene
"There are four boxes to be used in defense of liberty: soap, ballot, jury, and ammo. Please use in that order." -Ed Howdershelt (Author)
Yahoo.com and AOL/TW attorneys please note, additions to the above message by Gene Heskett are: Copyright 2007 by Maurice Eugene Heskett, all rights reserved.
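For comparison with the launchd plist, a hedged sketch of the classic xinetd service entry Gene alludes to (typically /etc/xinetd.d/amanda; the paths, user, and group here are assumptions for a typical from-source install, not taken from the thread):

```
service amanda
{
        socket_type     = dgram
        protocol        = udp
        wait            = yes
        user            = amanda
        group           = disk
        server          = /usr/local/libexec/amandad
        disable         = no
}
```

This form also assumes an "amanda" entry (10080/udp) in /etc/services, since xinetd resolves the service name there.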
next tape and next new tape info in reports
Hi,

I am getting confused (so what's new) about the first three lines of the report amdump emails me. I did several dumps in a row and got these results:

  These dumps were to tape DAILY-3.
  The next tape Amanda expects to use is: DAILY-14.
  The next new tape already labelled is: DAILY-4.

  These dumps were to tape DAILY-4.
  The next tape Amanda expects to use is: DAILY-14.
  The next new tape already labelled is: DAILY-5.

  These dumps were to tape DAILY-1.
  The next tape Amanda expects to use is: DAILY-14.
  The next new tape already labelled is: DAILY-5.

Notice that Amanda does not use DAILY-14 (I had labeled and formatted 14 tapes in the beginning, and the first dump I did on this setup went to that tape). Later I changed the tapecycle to 4 (since I am still testing, I decided having to wait 14 days to see how it all pans out is a bit long). The fact that it doesn't use DAILY-5 is fine, since the tapecycle is only 4 long and I did label 14 tapes. So is the take-home message that I should just ignore that second line about the next tape Amanda expects to use?

René
amreport reports wrong results
Hi,

I just tried the amreport command, and it looks like it is not providing the proper result. The email it generated said the results were missing for every client, and in the summary the one line I kept below was repeated for every client too. I checked, and indeed there is no log file in the location it is looking for. There are files of the format log.20061206004500.0, so could this have something to do with the fact that I am using 'usetimestamps' in amanda.conf?

  *** THE DUMPS DID NOT FINISH PROPERLY!

  The next tape Amanda expects to use is: a new tape.
  The next new tape already labelled is: DAILY-1.

  FAILURE AND STRANGE DUMP SUMMARY:
    werner.richmond.edu  /home/rene    RESULTS MISSING
    [snip]
    hartree.richmond.edu /home/amanda  RESULTS MISSING

  amreport: ERROR could not open log /usr/local/etc/amanda/daily/log: No such file or directory

  STATISTICS:
                            Total      Full      Incr.
  Estimate Time (hrs:min)    0:00
  Run Time (hrs:min)         0:00
  Dump Time (hrs:min)        0:00      0:00      0:00
  Output Size (meg)           0.0       0.0       0.0
  Original Size (meg)         0.0       0.0       0.0
  Avg Compressed Size (%)     --        --        --
  Filesystems Dumped            0         0         0
  Avg Dump Rate (k/s)         --        --        --
  Tape Time (hrs:min)        0:00      0:00      0:00
  Tape Size (meg)             0.0       0.0       0.0
  Tape Used (%)               0.0       0.0       0.0
  Filesystems Taped             0         0         0
  Chunks Taped                  0         0         0
  Avg Tp Write Rate (k/s)     --        --        --

  DUMP SUMMARY:
                                  DUMPER STATS        TAPER STATS
  HOSTNAME     DISK        L ORIG-kB OUT-kB COMP% MMM:SS KB/s MMM:SS KB/s
  albert.richm -ome/amanda   MISSING ---
  [snip]
  werner.richm /home/rene    MISSING ---

  (brought to you by Amanda version 2.5.1p1)

I assume that the dump itself worked properly, since amdump sent me this email (with similar snipping):

  These dumps were to tape DAILY-14.
  The next tape Amanda expects to use is: a new tape.
  The next new tape already labelled is: DAILY-1.

  STATISTICS:
                            Total      Full      Incr.
  Estimate Time (hrs:min)    0:00
  Run Time (hrs:min)         0:00
  Dump Time (hrs:min)        0:00      0:00      0:00
  Output Size (meg)          33.3      33.3       0.0
  Original Size (meg)       110.9     110.9       0.0
  Avg Compressed Size (%)    30.0      30.0       --
  Filesystems Dumped           11        11         0
  Avg Dump Rate (k/s)      2366.8    2366.8       --
  Tape Time (hrs:min)        0:00      0:00      0:00
  Tape Size (meg)            33.3      33.3       0.0
  Tape Used (%)              33.3      33.3       0.0
  Filesystems Taped            11        11         0
  Chunks Taped                  0         0         0
  Avg Tp Write Rate (k/s)  1738.2    1738.2       --

  USAGE BY TAPE:
    Label     Time   Size    %     Nb  Nc
    DAILY-14  0:00  34080k  33.3   11   0

  NOTES:
    planner: Adding new disk werner.richmond.edu:/home/rene.
    [snip]
    taper: tape DAILY-14 kb 34080 fm 11 [OK]

  DUMP SUMMARY:
                                  DUMPER STATS        TAPER STATS
  HOSTNAME     DISK        L ORIG-kB OUT-kB COMP% MMM:SS KB/s MMM:SS KB/s
  albert.richm -ome/amanda 0   10770   2624  24.4   0:00 5554.3  0:00 5498.7
  [snip]
  werner.richm /home/rene  0    1970   1920  97.5   0:00 9078.8  0:05  367.7

  (brought to you by Amanda version 2.5.1p1)

Is this a bug in amreport, or does it have to do with some new features (e.g. usetimestamps) whose ramifications are not implemented throughout the suite of programs?

Thanks, René
dump cycles and such
Hi,

I am trying to set up my test system so that at any time I have 2 full backups on 5 tapes, with incremental ones in between them. I thought that I should be able to do that using these lines in my amanda.conf:

  dumpcycle 4 tapes
  runspercycle 4
  tapecycle 5
  usetimestamps true  # make sure I can do more than one test a day

When I run an amdump on this test five times in a row (all on the same day, since I am not patient), I end up with no level 0 dumps (since they got overwritten). I assumed that by setting the dumpcycle to 4 tapes, I should always have a full dump every fourth tape, and with a total of 5 tapes I'd end up with two level 0 dumps on all my tapes together. Am I doing something wrong in the conf file, or is it not possible to test amdump several times on the same day to try to speed up the testing of a longer dump cycle?

Thanks, René
Re: dump cycles and such
Hi Jean-Louis,

I had hoped that the list of possible suffixes (per http://wiki.zmanda.com/index.php/Amanda.conf) could be interpreted to mean that I could tell Amanda to have a dumpcycle of 4 tapes as opposed to days. Since you are telling me that that won't work, can I deduce that I cannot test out a dumpcycle of 5 days by running amdump five times in a row (since that will all be on the same day)? Could dumpcycle in units of tapes be considered as a feature request?

Thanks, René

On Nov 28, 2006, at 2:50 PM, Jean-Louis Martineau wrote:

> dumpcycle is a number of days; you set it to 4 days. The scheduler
> works in days, not in number of tapes or runs.
>
> Jean-Louis
>
> René Kanters wrote:
>> Hi, I am trying to set up my test system so that at any time I have
>> 2 full backups on 5 tapes, with incremental ones in between them. I
>> thought that I should be able to do that using these lines in my
>> amanda.conf:
>>
>>   dumpcycle 4 tapes
>>   runspercycle 4
>>   tapecycle 5
>>   usetimestamps true  # make sure I can do more than one test a day
>>
>> When I run an amdump on this test five times in a row (all on the
>> same day, since I am not patient), I end up with no level 0 dumps
>> (since they got overwritten). I assumed that by setting the
>> dumpcycle to 4 tapes, I should always have a full dump every fourth
>> tape, and with a total of 5 tapes I'd end up with two level 0 dumps
>> on all my tapes together. Am I doing something wrong in the conf
>> file, or is it not possible to test amdump several times on the same
>> day to try to speed up the testing of a longer dump cycle?
>>
>> Thanks, René
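As Jean-Louis says, dumpcycle takes time units. A hedged sketch of the same configuration expressed in days (the values mirror the original test setup; the comments reflect one reading of the amanda.conf man page, not authoritative advice):

```
dumpcycle 4 days     # every DLE gets a full dump at least every 4 days
runspercycle 4       # number of amdump runs within one dumpcycle
tapecycle 5 tapes    # should exceed the tapes used per dumpcycle, so a
                     # full is not overwritten before its successor exists
usetimestamps true   # record runs with full timestamps, not just dates
```

Since the scheduler works in calendar days, several amdump runs on the same day still count as one day of the cycle, which is why the same-day test runs kept overwriting the level 0 dumps.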
amreport issues
Hi,

I noticed that when I issue the amreport command I get an error in the report that it generates:

  amreport: ERROR could not open log /usr/local/etc/amanda/test/log: No such file or directory

That file does indeed not exist, but there are a lot of log.xx.0 files in the /usr/local/etc/amanda/test/ directory. The strange thing is that an amdump test will generate an email message with the (somewhat) expected contents.

René
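amreport can be pointed at a specific log file with its -l option, which may work around the missing plain "log" file when usetimestamps leaves only timestamped logs behind. The timestamped filename below is a made-up example in the log.<timestamp>.0 style, and the guard simply reports when amreport is not installed:

```shell
# Run amreport against one specific timestamped log instead of the
# default <confdir>/log (filename here is a hypothetical example).
if ! command -v amreport >/dev/null 2>&1; then
    echo "amanda tools not installed"
    exit 0
fi
amreport test -l /usr/local/etc/amanda/test/log.20061206004500.0
```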
Re: questions regarding incremental backup sizes
Hi Gene,

Sorry about not hitting reply-all. I will from now on!

So are there instructions regarding the best way to set up Amanda to do backups to a disk as opposed to tape? I found the sample for using the test environment with virtual tapes at http://wiki.zmanda.com/index.php/Test_environment_with_virtual_tapes. I took out the holdingdisk part of that amanda.conf, since I assumed that because I am writing directly (and only) to a disk it would not be needed, but your comment suggests that that may not be the proper way to think about it.

By the way, where can I look at the mailing list archives to try to find my answers there?

Cheers, René

On Nov 15, 2006, at 8:09 AM, Gene Heskett wrote:

On Wednesday 15 November 2006 07:50, René Kanters wrote:

I've added the list back; please use 'reply all' when replying to any mailing list. That way the list archives can be searched, with a very high probability of finding an answer to the problem you have because someone else already had it and most likely solved it. Which is true for both problems you have asked about so far. :-)

> Hi Gene, Thanks, that was indeed it. I was wondering whether you
> know why I get in my amdump.1 a lot of
>
>   find diskspace: not enough diskspace. Left with 1952 K
>   driver: find_diskspace: time 0.138: want 1952 K
>
> and now that my incremental ones are working I get for an incremental
> backup the similar message
>
>   driver: find_diskspace: time 0.142: want 64 K
>   find diskspace: not enough diskspace. Left with 64 K
>
> while my test tapes are 5 MB and I am only backing up a set of files
> that is 2 MB big.

By default, the holding disk area is 100% reserved for incrementals. To use it (and it's highly recommended, to save shoe-shining wear and tear on the drive), you should change the keyword "reserve" in your amanda.conf to allow it to be used for fulls too. I use 30% here.
This 'holding disk' is a buffer area/directory on one of your hard drives that is used as a scratchpad, where the files being compressed are built up until that particular disklist entry is completed, at which point it is all written to the tape at the tape drive's (or the interface's) full speed. Without such a holding disk area set up and in use, any backup that involves compression will be written as the compressor spits it out, so the drive will be stopped and started many times, trying to re-cue back to where it wrote the last data each time. This wastes considerable time and multiplies the wear on the drive by quite a bit.

--
Cheers, Gene
"There are four boxes to be used in defense of liberty: soap, ballot, jury, and ammo. Please use in that order." -Ed Howdershelt (Author)
Yahoo.com and AOL/TW attorneys please note, additions to the above message by Gene Heskett are: Copyright 2006 by Maurice Eugene Heskett, all rights reserved.
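A hedged amanda.conf sketch of Gene's advice (the directory path and size are invented for illustration; "reserve" is the documented parameter name, and its documented meaning is slightly narrower than Gene's summary — it applies when no tape is available):

```
holdingdisk hd1 {
    directory "/dumps/amanda"   # hypothetical scratch area
    use 2 Gb                    # cap how much of the disk Amanda uses
}
reserve 30   # percent of holding space kept for incrementals when no
             # tape is available; the default of 100 shuts fulls out
```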
confused and concerned about virtual tape sizes
Hi,

I also posted this on the forum, but I am not sure whether that was the right place to go. I don't understand the message mailed to me regarding the amount of disk space taken up by the test backups to files. We actually plan to run Amanda with backup to files anyway, so this is an important thing for me to understand (and to avoid running out of disk space).

I set up a test environment analogous to the Test_environment_with_virtual_tapes one on the wiki, i.e., with amanda.conf set up using chg-disk and

  dumpcycle 7
  runspercycle 5
  tapecycle 5

I back up a small directory (/home/rene, about 2.1 MB), and when I run amdump I get messages with (excerpts):

  STATISTICS:
                       Total   Full   Incr.
  first run:
  Output Size (meg)      1.9    1.9    0.0
  Original Size (meg)    1.9    1.9    0.0

  2nd-5th runs:
  Output Size (meg)      1.9    0.0    1.9
  Original Size (meg)    1.9    0.0    1.9

  6th run:
  Output Size (meg)      1.9    0.0    1.9
  Original Size (meg)    1.9    0.0    1.9

  planner: Last full dump of werner.richmond.edu:/home/rene on tape TEST-01 overwritten on this run.

The earlier runs only report in how many runs the full dump will be overwritten. So am I interpreting this wrong, or is the incremental backup doing a dump of the complete /home/rene? I never touched any of the files between the dumps, so I had expected the incremental backup not to do anything.

What worries me even more, since we are planning to back up to an external hard disk, is that if I look at the contents of each of the slots (1-5), they all have in them a file:

  -rw-------  1 amanda disk 1966080 Nov 8 11:27 1.werner.richmond.edu._home_rene.1

i.e., the full size of my (compressed) /home/rene. So am I not saving any space using an incremental backup? Or what am I missing here? Can anybody please set me straight on this?

Thanks, René

PS: By the way, I am running version 2.5.1p1.
PPS: I just noticed in the amanda.conf man pages that the tapecycle should be larger than the dumpcycle (which is not the way it is in the example conf on the wiki page I referred to), but setting the runspercycle and dumpcycle to 3 does not seem to make a difference.
questions regarding incremental backup sizes
Hi,

On my backup with dumpcycle/runspercycle/tapecycle settings of 4/4/5, I get for every backup that the size of the dump is the same as the full backup. When I check the disk files (since I am doing backups only to disk) I do indeed see that all the slots have the same-sized backup in them. My amadmin disklist output is:

  [EMAIL PROTECTED] amanda]$ amadmin test disklist
  line 1:
      host werner.richmond.edu:
          interface default
      disk /home/rene:
          program "GNUTAR"
          priority 1
          dumpcycle 4
          maxdumps 1
          maxpromoteday 1
          bumpsize 10240
          bumpdays 2
          bumpmult 1.50
          strategy STANDARD
          estimate CLIENT
          compress CLIENT FAST
          comprate 0.50 0.50
          encrypt NONE
          auth BSD
          kencrypt NO
          amandad_path X
          client_username X
          ssh_keys X
          holdingdisk AUTO
          record NO
          index YES
          fallback_splitsize 10Mb
          skip-incr NO
          skip-full NO
          spindle -1

and my amdump.1 is attached. Just in case this has anything to do with it, my gtar version is 1.13.25.

Thanks, René

amdump.1
Description: Binary data
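One detail that stands out in the disklist dump above is "record NO". If that is in effect for real runs, the client never saves the state its incrementals are computed against, so each run can behave like a first-time full. This is a hedged guess at the cause, not a confirmed diagnosis, and the dumptype name and other options below are illustrative:

```
define dumptype user-tar-recorded {
    program "GNUTAR"
    compress client fast
    record yes    # let the client update /etc/amandates and its
                  # gnutar-lists state so incrementals have a baseline
    index yes
}
```

"record no" is typically meant for test configurations precisely so they do not disturb the state used by the production runs.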