Mac OS X installation of Amanda
I have about 30 Mac OS X (10.3, 10.4, 10.5) computers/servers I am installing Amanda on, but I am finding that the installation process (which I've scripted, so there's not a whole lot I have to do) can take up to three hours on a slower machine because of the need to install Xcode, MacPorts, and Amanda. The biggest time consumers are the Xcode and MacPorts installs. While I am sure some libraries are needed, has anyone created (or have some tips on creating) a completely static, fully bundled-libraries build of Amanda which could be installed within 20-30 minutes rather than 2-3 hours? - Dan Brown monkeypa...@shaw.ca
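One common way around per-machine compiles is to build once and ship the installed tree. The sketch below is a hedged illustration of that idea, not an Amanda-provided mechanism: a temp directory stands in for /opt/local, and on real machines you would tar /opt/local itself and assume every target runs the same OS version and architecture.

```shell
# Hedged sketch: instead of compiling Xcode + MacPorts + Amanda on every Mac,
# build once on a reference machine and push the installed tree as a tarball.
# A temp dir stands in for /opt/local here; paths and the amandad stand-in
# binary are examples, not the real installation.
set -e
buildhost=$(mktemp -d)                      # pretend reference build machine
mkdir -p "$buildhost/opt/local/sbin"
printf '#!/bin/sh\n' > "$buildhost/opt/local/sbin/amandad"   # stand-in binary
tar -czf "$buildhost/amanda-prebuilt.tar.gz" -C "$buildhost" opt/local
target=$(mktemp -d)                         # pretend freshly imaged Mac
tar -xzf "$buildhost/amanda-prebuilt.tar.gz" -C "$target"
ls "$target/opt/local/sbin/amandad"         # deployed in seconds, not hours
```

On real targets the extract step would be `sudo tar -xzf amanda-prebuilt.tar.gz -C /`, after copying the tarball over with scp.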
Dumps cannot be retrieved from tape...
Hi, I've got a backup set for a bunch of Mac OS X users who are backed up on a daily basis via Samba shares. Backups go to the holding disk first, then are dumped to tape once I've got enough to fill an LTO3 tape. The other day I was asked to restore a file which had ended up going to tape (just a day or two ago). I came to realize that none of the backups which have been flushed to tape are accessible via amrecover; backups which are still on the holding disk can be accessed. This appears to be specific to the Samba share backups, as a near identical setup with a number of Linux servers (using the Amanda client, though) does not have this problem. There was one Samba share backup in that profile up until a month or so ago, but I've never had to restore anything from it. I know it's there because the index exists for it, but the index-tape association doesn't appear to have occurred. So I've got two questions: 1. How do I fix my config (I assume it's a config problem) so that files can be retrieved from tape for this profile (otherwise I might as well set my tape device to /dev/null)? 2. How do I retrieve my file off of tape without using amrecover? - Dan Brown [EMAIL PROTECTED]
Re: Dumps cannot be retrieved from tape...
Thanks Joshua, Jean-Louis. amrestore doesn't have the sort of granularity I'd like, but the shotgun approach works as well as a pinpoint one. I was hoping I wouldn't have to restore 178GB of data for a 10MB file. Dan Brown [EMAIL PROTECTED]
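For readers hitting the same problem: the usual amrestore pattern pipes the dump off tape into tar, which then extracts only the named path rather than unpacking the whole dump. On the real system it would look something like `mt -f /dev/nst0 rewind; mt -f /dev/nst0 fsf N; amrestore -p /dev/nst0 host disk | tar -xvf - ./path/to/file` (device, host, and disk names are examples). The runnable demo below uses a tar file on disk in place of the amrestore stream to show the single-file extraction.

```shell
# Demo: a tar archive on disk stands in for the amrestore -p output stream.
# The point is that tar, fed the stream on stdin, extracts only the one
# named path instead of the entire (e.g. 178GB) dump.
set -e
work=$(mktemp -d); cd "$work"
mkdir -p share/docs
echo "the 10MB file" > share/docs/wanted.txt
echo "178GB of bulk" > share/docs/bulk.txt
tar -cf dump.tar share                      # stands in for the dump image on tape
mkdir restore && cd restore
cat ../dump.tar | tar -xf - share/docs/wanted.txt   # pipe, as from amrestore -p
cat share/docs/wanted.txt
```

Note the stream is still read end to end, so this saves disk space and unpack time, not tape read time.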
Re: Dumps cannot be retrieved from tape...
amrecover_changer /dev/null     # amrecover will use the changer if you restore
                                # from this device. It could be a string like
                                # 'changer' and amrecover will use your changer
                                # if you set your tape with 'settape changer'

holdingdisk hd1 {
    comment main Mac holding disk
    directory /dumps/holding-mac    # where the holding disk is
    use -100 Mb                     # how much space can we use on it
                                    # a non-positive value means:
                                    # use all space but that value
    chunksize 2Gb                   # size of chunk if you want big dump to be split
}

reserve 30      # percent
autoflush no

infofile /etc/amanda/Macs/curinfo   # database DIRECTORY
logdir /etc/amanda/Macs             # log directory
indexdir /etc/amanda/Macs/index     # index directory
tapelist /etc/amanda/Macs/tapelist  # list of used tapes

define tapetype LTO3-400 {
    comment Dell PV124T LTO3 (hardware compression off)
    length 402432 mbytes
    filemark 0 kbytes
    speed 71189 kps
}

define tapetype LTO2-200 {
    comment Dell PV124T LTO3 (degraded LTO2 tapes)
    length 201216 mbytes
    filemark 0 kbytes
    speed 71189 kps
}

define dumptype global {
    comment Global definitions
    index yes
    estimate client
    holdingdisk required
    record yes
    fallback_splitsize 64m
    auth bsdtcp
    maxdumps 3
}

define dumptype user-tar {
    global
    program GNUTAR
    comment user partitions dumped with tar
    priority high
    compress none
    index
}

define interface local {
    comment a local disk
    use 1000 kbps
}

define interface le0 {
    comment 10 Mbps ethernet
    use 400 kbps
}

define interface eth0 {
    comment 1Gbps ethernet
    use 921600 kbps
}

Dan Brown [EMAIL PROTECTED]
Pausing amflush?
Is there any way to pause an amflush? Our backup system dumps to a holding disk first before flushing to tape. A single dump will usually not fill either an LTO2 or LTO3 tape (two different backup profiles, btw) unless a fair number of level 0 dumps are involved. One of these holding disks lives on an NFS server over a Gb connection. The NFS server doesn't do a whole lot else, so it doesn't cause too many headaches, although some direct-attached storage would certainly work better. Anyway, I accidentally pulled the plug on the NFS server the other day while doing some cable management and wasted the majority of an LTO2 tape (16GB of 200GB) before the server came back up. Amanda ran along happily for a while, not getting any data, but I assume polling the NFS share for the data it wanted until it timed out. So is there a way to pause an amflush? If not, where do I make a feature request? I realize it's not an entirely useful feature for everyday use, but it could be useful under certain circumstances. --- Dan Brown
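As far as I know there is no built-in pause in amflush itself; a hedged workaround is plain Unix job control, i.e. SIGSTOP the taper/amflush processes, fix the NFS server, then SIGCONT them. This is an untested assumption against Amanda's internals (dtimeout or other timeouts may still fire if the processes stay stopped too long). The real commands would be along the lines of `pkill -STOP -u amanda taper` and later `pkill -CONT -u amanda taper`; the runnable demo below shows the mechanism on a stand-in process.

```shell
# Demo of SIGSTOP/SIGCONT on a stand-in process (a sleep) playing the role
# of the taper. While stopped, the process consumes no data and writes
# nothing to tape; CONT resumes it exactly where it left off.
set -e
sleep 60 &                      # stands in for the taper process
pid=$!
kill -STOP "$pid"               # "pause": process is frozen by the kernel
sleep 1                         # give the signal time to be delivered
state=$(ps -o stat= -p "$pid")  # state string contains "T" while stopped
kill -CONT "$pid"               # "resume"
kill "$pid" 2>/dev/null || true
echo "$state"
```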
DLE Regexp?
I have a couple of disk list entries for a samba client which seem to be causing the disks to be backed up with a full dump nearly every time. I am suspecting it's a regular expression problem since the DLEs are almost identical. The DLEs look like so: # Design Resources Mac moi//coralie/design_resources_archive user-tar moi//coralie/design_resources user-tar It seems to be randomly picking which to do a full dump on, based on (I'm guessing), that the share /design_resources is also a regex match for /design_resources_archive. As an example. Last friday it did a full dump of both shares. //coralie/design_resources_archive 198317MBlvl0 //coralie/design_resources 45851MBlvl0 On Monday: //coralie/design_resources_archive 71558MBlvl1 //coralie/design_resources 0MBlvl1 Yesterday: //coralie/design_resources_archive 20419MBlvl1 //coralie/design_resources 0MBlvl1 The archive is changed maybe a couple of times per year, so there shouldn't be any big changes like this So I guess I have two questions on this. Is there an easy way to fix this? Or can I simply add a trailing / at the end of all of the samba entries? Amanda then warns me that new info and index dirs will be created for all entries. I assume if I do that, a new level 0 dump will be done on all of these dirs. I imagine I can move all of the files to the new format if needed however. --- [EMAIL PROTECTED]
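If the substring-match theory is right, the trailing-slash fix from the message would make each diskname unambiguous. A hedged sketch of what the disklist would look like (Amanda treats renamed entries as new DLEs, which is why it warns about new info/index dirs and will start over at level 0):

```
# Design Resources Mac -- trailing slashes so neither diskname is a
# prefix/substring match for the other
moi //coralie/design_resources_archive/ user-tar
moi //coralie/design_resources/         user-tar
```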
Re: amanda not using smbclient for samba clients?
Marc Muehlfeld wrote: Dan Brown schrieb:

# disklist
# Design Resources Mac
coralie //coralie/design_resources_archive/ tar-comp-srvbest-ne
coralie //coralie/design_resources/ tar-comp-srvbest-ne

The first column is the name of the machine that connects to the Samba share, not the client. Here's one of my DLEs for example: nucleus.mr.lfmg.de //amplicon/backup$ SMB_Low_Client-Fast Explanation: Nucleus connects to my client Amplicon and collects the data. The machine need not be the backup server itself. E.g. if you have a remote subnet connected over a slow link, you can configure a remote Linux machine to collect and compress your data and then transfer it from there to your backup server to save bandwidth. It wastes bandwidth on the WAN connection to transfer the whole Samba data to your backup server and compress it there. You can of course do this in your local subnet too, to keep the load on your backup server low.

It can't be the backup server itself, or shouldn't be? I was following the 15 minute setup example on zmanda.com and it used the backup server itself. Isn't that the point of having --with-smbclient in the compile script? It doesn't really matter how loaded down the backup server is, as long as the computers with the shares I am backing up don't experience a huge load. Out of curiosity I've changed it to the backup server itself:

ministryofinformation //coralie/design_resources_archive/ tar-comp-srvbest-ne

but now on the target computer I see no connection attempts at all via tcpdump. These computers are all on the same subnet and have qualified addresses in an internal domain (its FQDN would be ministryofinformation.thezoo). --- Dan Brown
amanda not using smbclient for samba clients?
rawtapedev /dev/null    # the raw device to be used (ftape only)
maxdumpsize -1          # Maximum number of bytes the planner will schedule
tapetype LTO3-400       # what kind of tape it is (see tapetypes below)
labelstr ^Macs-[0-9][0-9][0-9]*$    # label constraint regex: all tapes must match
amrecover_do_fsf yes        # amrecover will call amrestore with the
                            # -f flag for faster positioning of the tape
amrecover_check_label yes   # amrecover will call amrestore with the
                            # -l flag to check the label
amrecover_changer /dev/null # amrecover will use the changer if you

holdingdisk hd1 {
    comment main Mac holding disk (1.3TB)
    directory /dumps/holding-mac    # where the holding disk is
    use -100 Mb                     # how much space can we use on it
    chunksize 2Gb
}

reserve 30      # percent
autoflush no

infofile /etc/amanda/Macs/curinfo   # database DIRECTORY
logdir /etc/amanda/Macs             # log directory
indexdir /etc/amanda/Macs/index     # index directory
tapelist /etc/amanda/Macs/tapelist  # list of used tapes

define tapetype LTO3-400 {
    comment Dell PV124T LTO3
    length 402432 mbytes
    filemark 0 kbytes
    speed 71189 kps
}

define dumptype global {
    comment Global definitions
    index yes
    estimate client
    holdingdisk required
    record yes
    fallback_splitsize 64m
    auth bsdtcp
    maxdumps 2
}

define dumptype tar-comp-srvbest-ne {
    comment gnutar with best server compression
    program GNUTAR
    compress server best
    estimate server
    index
}

define interface local {
    comment a local disk
    use 1000 kbps
}

define interface eth0 {
    comment 1Gbps ethernet
    use 921600 kbps
}

--- Dan Brown
Changing the from address in reports
I would like my Amanda backup reports to be mailed to a non-local address. Currently when I get them, they come from [EMAIL PROTECTED], so they don't even have the hostname of the backup server associated with them. I've used Amanda for quite a while to back up a number of servers and am currently using 2.5.2p1, but up until recently I just had reports delivered to the local amanda user. [EMAIL PROTECTED] is getting caught by our spam filter for obvious reasons. Is there somewhere within the Amanda config I can set this, or is this something I have to tell the mail system to forcefully add a domain/hostname onto for outgoing mail? --- Dan Brown [EMAIL PROTECTED]
Re: Changing the from address in reports
Chris Hoogendyk wrote: Dustin J. Mitchell wrote: On 11/10/07, Dan Brown [EMAIL PROTECTED] wrote: [EMAIL PROTECTED] is getting caught up by our spam filter for obvious reasons. Is there somewhere within the amanda config I can set this or is this something I have to tell the mail system to forcefully add a domain/hostname onto for outgoing mail? At present, no, there is no provision for this in the Amanda configuration. However, it would be a fairly straightforward piece of functionality to add to server-src/reporter.c, with a corresponding new configuration option in common-src/conffile.[ch]. Dan, do you want to give it a shot? Anyone else? On the other hand, what the mail comes from is all controlled by your mail configuration and host files on the backup server. I have a separate mail server that is the only server that can receive mail. My other servers are configured to not be able to receive mail, and to relay mail through the mail server. The mail server, in turn, whitelists my other servers. I'm on Solaris 9, and the Sun guide to the secure configuration of Solaris 9 has a section on configuring Sendmail when you're not a mail server. It uses m4, so you aren't just dipping into sendmail.cf. While the idea of doing a custom patch on Amanda to force it to send from backupuser@my.hostname.com is tempting, I don't think I've touched C in about 7 years, so it would probably take a week or two to get it just right. :P This backup server is brand new, along with seven other servers I've set up and configured in the last two months, and I realized a little after sending my original message that I had missed putting my usual MTA on here (exim) and had the default MTA (sendmail). So I've now got it working as I'd like. Now for my next questions. :P --- Dan Brown [EMAIL PROTECTED]
Disappearing Dumps...
Mb          # minimum savings (threshold) to bump level 1 -> 2
bumpdays 1  # minimum days at each level
bumpmult 4  # threshold = bumpsize * bumpmult^(level-1)
etimeout 300    # number of seconds per filesystem for estimates
dtimeout 1800   # number of idle seconds before a dump is aborted
ctimeout 30     # maximum number of seconds that amcheck waits
                # for each client host
tapebufs 256
runtapes 1              # number of tapes to be used in a single run of amdump
tapedev /dev/nst0       # the no-rewind tape device to be used
rawtapedev /dev/null    # the raw device to be used (ftape only)
maxdumpsize -1          # Maximum number of bytes the planner will schedule
tapetype DLT4           # what kind of tape it is (see tapetypes below)
labelstr ^Full[0-9][0-9]*$  # label constraint regex: all tapes must match
amrecover_do_fsf yes        # amrecover will call amrestore with the -f flag
amrecover_check_label yes   # amrecover will call amrestore with the -l flag
amrecover_changer /dev/null # amrecover will use the changer if you restore

# holding disks
holdingdisk hd1 {
    comment Main Holding Disk - 150GB
    directory /var/amanda/dumps # where the holding disk is
    use -1Gb                    # how much space can we use on it
    chunksize 0                 # size of chunk if you want big dump to be split
}
holdingdisk hd2 {
    comment Secondary Holding Disk - 50GB
    directory /var/amanda/dumps2
    use -1Gb
    chunksize 0
}

reserve 30      # percent
autoflush no

infofile /var/amanda/Full/curinfo   # database DIRECTORY
logdir /var/amanda/Full             # log directory
indexdir /var/amanda/Full/index     # index directory
#tapelist /var/amanda/Full/tapelist # list of used tapes
# tapelist is stored, by default, in the directory that contains amanda.conf

# tapetypes
define tapetype DLT4 {
    comment DLT4 tape drives
    length 40960 mbytes     # 40 Gig tapes
    filemark 4000 kbytes    # I don't know what this means
    speed 2814 kbytes       # 3.5 Mb/s
}

# dumptypes
define dumptype global {
    comment Global definitions
}
define dumptype high-tar-comp {
    comment partitions dumped with tar, compressed with gzip
    priority high
    compress client best
    index yes
    exclude list /etc/amanda/exclude.gtar
}
define dumptype comp-high-full {
    global
    comment High Compression Dump
    compress client best
    dumpcycle 0
    index yes
    priority high
}
define interface local {
    comment a local disk
    use 1000 kbps
}
define interface eth0 {
    comment 100 Mbps ethernet
    use 1000 kbps
}

-- disklist --

# Separation of other servers into multiple disks
#blackhawk /clients / {
#   high-tar-comp
#   include ./clients
#   } 1
#blackhawk /notclients / {
#   high-tar-comp
#   exclude ./clients ./tmp
#   } 1
#gimp /clients / {      ## all client directories
#   high-tar-comp
#   include ./clients
#   } 1
#gimp /mysql / {        ## the mysql databases
#   high-tar-comp
#   include ./var/lib/mysql
#   } 1
#gimp /home / {         ## home directories
#   high-tar-comp
#   include ./home
#   } 1
#gimp /incoming / {     ## the incoming directory
#   high-tar-comp
#   include ./zu/incoming
#   } 1
#gimp /everythingelse / {   ## whatever isn't included in the above four definitions
#   high-tar-comp
#   include ./*
#   exclude ./clients ./tmp ./zu/incoming ./home ./var/lib/mysql
#   } 1
#arahk /everything / {  ## the entire 15GB disk of the server arahk
#   high-tar-comp
#   include ./*
#   } 1
oldgimp /clients / {        # all client directories
    high-tar-comp
    include ./clients
    } 1
oldgimp /var/lib/mysql / {  # mysqld databases
    high-tar-comp
#   include ./var/lib/mysql
    } 2
oldgimp /zu/incoming / {    # the old zu/incoming
    high-tar-comp
#   include ./zu/incoming
    } 1
oldgimp /home / {           # home directories
    high-tar-comp
    include ./home
    } 1
oldgimp /everythingelse / { # whatever isn't included in the above four definitions
    high-tar-comp
    include ./*
    exclude ./clients ./tmp ./zu/incoming ./home ./var/lib/mysql
    } 1

Oh yeah, the identical definitions which work on gimp don't work on oldgimp... --- Dan Brown [EMAIL PROTECTED]
Re: Disappearing Dumps...
Whoops, this was meant to go to the entire list. Gene Heskett wrote: On Monday 19 December 2005 17:53, Dan Brown wrote: I have recently upgraded an Amanda installation from 2.4.4p1 and upgraded the backup host from RedHat 7.3 to Fedora FC4. I've also conglomerated the backed-up hosts from several configurations, which ran independently of each other, into a single configuration which Amanda will manage itself (hopefully well). Other than the move of all hosts from their respective configurations into one, not much else has changed. There is a host, however, in this new config whose backups magically disappear after the first chunk. This host is a temporary copy of the old backup host, something which will sit in the backup rotation for a month or two to ensure we don't need any files off of the machine before it's completely wiped. Strangely enough, level 0 backups don't want to be dumped to the holding disks either. This is the mail from Amanda after the backup attempt (I've removed all other hosts in order to debug this). The holding disks are 150GB and 50GB in size respectively, both more than enough to hold any one part of the split-up disks. Ahh, but what is the value of the reserve keyword in your holdingdisk definition section of your amanda.conf? By default, all holdingdisk space is reserved for incremental backups, which prevents it from being used for level 0s. I have around 25GB free there, with a reserve setting of -500m, which means it can use all but the last 500 megs for level 0s. Actually, I had initially forgotten to uncomment that a couple of days ago and got the dumps rejected right away. Right now, both the 150GB and the 50GB disks are set to be filled up to capacity -1Gb. I also have only 30% of the disks (or is this the entire holding disk set?) reserved for incremental dumps only. Do I have to set this in each holding disk definition when I have more than one holding disk? --- Dan Brown [EMAIL PROTECTED]
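For what it's worth, and if I'm reading the config semantics correctly, `reserve` is a single global amanda.conf parameter (a percentage of the combined holding-disk space kept back for incrementals when no tape is available), not something repeated inside each holdingdisk block. A minimal sketch under that assumption:

```
# reserve applies once, to the whole holding-disk pool, not per disk
holdingdisk hd1 { directory /var/amanda/dumps  use -1Gb }
holdingdisk hd2 { directory /var/amanda/dumps2 use -1Gb }
reserve 30   # 30% of all holding space kept for incrementals in degraded mode
```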
TapeTypes definition affecting wear?
The system we have set up at work sends all backups, regardless of whether the tape drive is free, to a network holding disk before flushing to tape, to avoid the problem of "I just used a 40GB tape to back up 2GB of data; too bad I can't append new data to the tape." The only device between the holding disk machine and the server the tape drive is attached to is a switch, and between the high-capacity ethernet link and some fine tuning of Amanda's parameters to avoid the shoe-shining behavior of the tape drive, it works pretty well. That's just background, not the question I am looking for an answer to here. I just spent the last two hours manually unreeling half a kilometer of broken DLT-IV tape off of a DLT1 drive, and was about to throw a new tape back into the drive (after resetting its feeder/tape puller, etc.) to resume backup flushes, when it occurred to me that I rarely ever saw a flush report where X filesystems were flushed with no errors. Typically, in the notes section of the flush (or backup) report, there is a line saying:

taper: tape gimpFull02 kb 35794048 fm 14 writing file: No space left on device

And in the list of dump definitions (in this case, a partition) it will list the partition as failed:

gimp sda5 0 N/A 0 -- N/A N/A FAILED

So, will Amanda always try to stuff as much as possible onto a tape until it reaches the end of the tape, or will it actually estimate whether it can fit the next backed-up partition/directory onto the tape? If I used ammt or mt to wind to the end of the tape and read a few blocks, would I see data from the failed disk? If the tape drive actually reaches the physical end of the tape before it stops, that would explain why the spool of tape I pulled off appears to be from the very end of the reel and there appears to be no tape left in the cartridge.
My tapetype definition looks like this:

define tapetype DLT4 {
    comment DLTtapeIV on a Quantum DLT1 drive
    length 42949 mbytes     # 40 Gig tapes
    filemark 4000 kbytes    # I don't know what this means
    speed 2814 kbytes       # 2.5 Mb/s - nice and easy! :P
}

So if I define the tape to be larger than it really is (maybe I should define it as a 35GB tape instead?), will Amanda attempt to yank the end of the tape out of the cartridge every time, or is there normally enough force to do so anyway? The drive is a Quantum DLT1 in a LaCie enclosure; according to Quantum's specs the tapes should be considered 40GB. Do tapetype definitions affect this sort of thing at all, or should I just consider this the premature death (at a little over a year old) of a tape? --- Dan Brown [EMAIL PROTECTED]
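One hedged answer to the "define it smaller?" question: derating the tapetype `length` below nominal capacity makes the planner stop scheduling before physical end-of-tape, at the cost of some wasted capacity (the taper can still hit EOT if estimates run over). The value below is an example derate of roughly 10%, not a measured figure:

```
define tapetype DLT4 {
    comment DLTtapeIV on Quantum DLT1, length derated ~10% for safety margin
    length 36864 mbytes     # nominal 40GB tape declared as ~36GB
    filemark 4000 kbytes
    speed 2814 kbytes
}
```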
Re: DLT1 Tape drive performance...
Paul Bijnens wrote: Dan Brown wrote: I adjusted the tapebufs setting from 20 to 128, and the DLT drive now writes for around 10 seconds before stopping and seeking. I've adjusted it to 256 to see what happens once I need to flush the holding disk again. I think, because the holding disk is mounted via NFS, I may increase this to 512 (16MB) or higher to see what sort of threshold it has. Make it as large as possible (if you have the RAM available). I'm interested in the results of changing tapebufs too. It certainly helps, but indeed the big bottleneck from the NFS drive is either network speed or IDE performance; my guess is network performance. I decided to waste a backup tape (by not fully using it) and discovered with the changed settings that the drive will now write for at most 30 seconds, rewind a bit, then sit idle for around 5 seconds while the buffers fill. I'll have to see if I can get some more performance out of the network another way.
Re: DLT1 Tape drive performance...
Paul Bijnens wrote: Dan Brown wrote: During a backup, or a flush, the tape drive writes data for 4 seconds, then rewinds for 1 second, then writes for 4 seconds, then rewinds for 1 second, etc. This seems like a good way to wear out a drive. That's usually a symptom of a too-fast tape drive connected to a too-slow server. But before you throw out the server, verify that your configuration does indeed use the holding disk! Not only does that lower the lifetime of the drive, it's also immensely slow. Amanda does quite a good job of keeping the tape streaming, using two processes with a shared memory buffer pool (one that fills the buffer pool from the holding-disk file, the other that writes the buffer pool to tape). This may be the problem then, as the IDE holding disk is NFS-mounted from a third machine. The server with the backup is a SCSI-only system and doesn't support IDE. It was worth neither the cost of an expensive SCSI drive (+$700 CDN / 80GB?!) nor even a $60 IDE card for that matter. When we get our next server I'll convince the people with the money to move to SATA-based servers. A tape library would be nice eventually too. Will adjusting the tapetype help at all to speed up or slow down the drive, or is that only a rated write speed, like CD-R/RW discs (the CDs, not the burners) are rated to maximum speeds before data corruption starts to occur? I adjusted the tapebufs setting from 20 to 128, and the DLT drive now writes for around 10 seconds before stopping and seeking. I've adjusted it to 256 to see what happens once I need to flush the holding disk again. I think, because the holding disk is mounted via NFS, I may increase this to 512 (16MB) or higher to see what sort of threshold it has. Other possibilities I may try include connecting the holding disk server and the tape server directly via ethernet (they both have a spare connection at the moment) to cut out any network lag issues at our switches (and odd hub). Dan [EMAIL PROTECTED]
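To pin down whether the NFS holding disk can actually feed the drive, a simple sequential-read timing is often enough: if the holding disk can't sustain the drive's rated ~2814 KB/s, no amount of tapebufs will keep it streaming. Real invocations would be along the lines of `dd if=/var/amanda/dumps/<chunk> of=/dev/null bs=1M` on a dump chunk (path is an example); the runnable demo below just shows the measuring pattern on a scratch file.

```shell
# Measuring pattern: write a scratch file, then time a sequential read of it.
# On the real system you would read an existing dump chunk off the NFS mount
# and compare the resulting KB/s with the tape drive's rated speed.
set -e
scratch=$(mktemp)
dd if=/dev/zero of="$scratch" bs=1M count=32 2>/dev/null   # make a 32MB file
t0=$(date +%s)
dd if="$scratch" of=/dev/null bs=1M 2>/dev/null            # sequential read
t1=$(date +%s)
echo "read 32MB in $((t1 - t0))s"          # throughput ~= 32768 KB / elapsed
rm -f "$scratch"
```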
DLT1 Tape drive performance...
I've never really thought about this before until I had our DLT drive sitting beside our desk during some renovations. What should the average DLT1 tape drive perform like during backup? Ours happens to be a Lacie DLT1 (a Quantum DLT1 in a LaCie enclosure basically) which is used to backup approximately 120GB of data twice a week off of our two servers (~60GB each). It is attached to one of the servers. During a backup, or a flush, the tape drive writes data for 4 seconds, then rewinds for 1 second, then writes for 4 seconds, then rewinds for 1 second, etc. This seems like a good way to wear out a drive. I also have a DDS backup drive which backs up a number of Mac machines around the office using Retrospect. This machine writes data in a single stream, and only ever seems to pause when it switches to the next machine, verifies data, or seeks specific data on a tape. Now this is a bit of a comparison of apples and oranges but should a DLT1 be able to write a near continuous backup stream rather than this apparent write 4-seek 1, write 4-seek 1 cycle during backups and flushes? Any suggestions on optimizing settings? 
Here are my drive configs (minus stuff about logs, users, etc):

inparallel 4
dumporder BTBT
netusage 9600 Kbps
dumpcycle 4 weeks
runspercycle 20
tapecycle 5 tapes
dtimeout 1800
ctimeout 30
tapebufs 20
runtapes 1
tapedev /dev/nst0
rawtapedev /dev/null

holdingdisk hd1 {
    comment main holding disk
    directory /var/amanda/dumps
    use -1 Gb
    chunksize 0
}
reserve 45

define tapetype DLT1 {
    comment DLT1 tape drives
    length 40960 mbytes     # 40 Gig tapes
    filemark 4000 kbytes    # I don't know what this means
    speed 2814 kbytes       # 3.5 Mb/s
}

define dumptype high-tar-comp {
    root-tar
    comment partitions dumped with tar, compressed with gzip
    priority high
    compress client best
    index yes
    exclude list /etc/amanda/exclude.gtar
}

define interface local {
    comment a local disk
    use 1000 kbps
}
define interface eth0 {
    comment 100 Mbps ethernet
    use 1000 kbps
}

My disklist file:

blackhawk /clients / {
    high-tar-comp
    include ./clients
    } 1
blackhawk /notclients / {
    high-tar-comp
    exclude ./clients ./tmp
    } 1
Resetting the chunksize?
How do I find out what INT_MAX or MAX_FILE_SIZE is on my system? Previously my Amanda setup was dumping 40GB files onto a Samba share on a Windows 2000 workstation NTFS partition. Recently Amanda has been dumping backups in 2GB chunks, even though the configuration still says to use the system's INT_MAX/MAX_FILE_SIZE. I have tried setting the chunksize to 40GB (the size of the DLT4 tapes) but the chunks are still only 2GB each. Is it possible that recent changes to the Samba configuration could have caused this?
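One quick way to check the filesystem side of this (a hedged suggestion, and the path is an example): `getconf FILESIZEBITS <path>` reports, via pathconf, how many bits a file offset may use on that filesystem. An answer of 64 means files larger than 2GB are fine there; a 32-bit answer, or a CIFS mount negotiated without large-file support, would explain chunks capping out at 2GB.

```shell
# Query the maximum file-size bits for the filesystem holding the dumps.
# Replace /tmp with the actual mount point of the Samba share, e.g.
# getconf FILESIZEBITS /mnt/win2k-share
getconf FILESIZEBITS /tmp
```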
Samba mounted holding disks....
The holding disk on our backup server died a couple of weeks ago and we have yet to install the replacement, so for the last couple of weeks we've been dumping the backups which would normally go to the holding disk onto a Windows partition on a workstation (one of a couple with enough disk space to do this). This appears to be running normally, except that amcheck always reports a problem which "Should be fixed before run, if possible." The problem is that amcheck reports either 0 KB free or some negative value, because the partition the Samba share is mounted from has less free space than the amount configured to be left on that partition. When amdump runs, however, no problems occur. Is there any way to get rid of these messages until we get the new drive installed (which we'll leave until the share is full enough to amflush)?

Amanda Tape Server Host Check
WARNING: holding disk /var/amanda/dumps: only 0 KB free, using nothing
ERROR: /dev/nst0: rewinding tape: No medium found (expecting a new tape)
NOTE: skipping tape-writable test
Server check took 30.352 seconds

Amanda Backup Client Hosts Check
Client check: 1 host checked in 0.320 seconds, 0 problems found

(brought to you by Amanda 2.4.4p1)