Amanda user
If I want to change my amanda user (I have a good reason), am I right in thinking all I have to do is change the user in the amanda.conf of every config and change the user in inetd.conf and restart it on the server and every client? Or do I have to recompile amanda with the new user specified? - David Flood Systems Administrator [EMAIL PROTECTED] Tel: +44 (0)1224 262721 The Robert Gordon University School of Computing St. Andrews Street Aberdeen -
Repost: How to use Amanda through bidirectional firewall?
[Reposted from 2 Apr 2002] http://amanda.sourceforge.net/fom-serve/cache/139.html wrote:

Amanda from behind a firewall

Running an Amanda server from behind a firewall, to clients outside it, can be a bit tricky. Amanda uses quite a few ports for communications. The general sequence is:

1) The server makes a start-backup request on port 10080 to the client.
2) The client forks an amandad process, which then attempts to contact the server on a random UDP port.
3) The server opens 2 or 3 random TCP sockets back to the client per dumper process (one for data, one for messages, and one for index, if indexing is enabled).
4) Data starts shuffling.

The problem with a firewall is step 2. Since most firewalls are set up to allow any outgoing traffic, the other steps usually have no problems. But that random UDP port back in to the server is usually blocked. This causes a symptom of "timeout waiting for ack" in /tmp/amanda/amandad.debug on the client. ... You can also use the connection tracking feature of the new Linux 2.4 firewall code. This will eliminate the need to open incoming ports on the firewall. [EMAIL PROTECTED]

Hi everyone, I would like to run the stock Red Hat 7.2 Amanda build on machines where both the client and the server run a firewall. To do this, I think I'm going to need connection tracking. :-) Is there any documentation available on using netfilter connection tracking with Amanda? I found this thread on the netfilter developers list: http://lists.samba.org/pipermail/netfilter-devel/2001-May/001263.html It's nearly a year old, and there was no resolution at the time. Has anyone got it working? Paul http://paulgear.webhop.net
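For what it's worth, here is a minimal sketch of what the conntrack approach can look like on the Amanda server's own firewall, assuming Linux 2.4 iptables with the state module loaded. The client address and chain layout are placeholders, not taken from this thread:

```shell
#!/bin/sh
# Sketch of the connection-tracking approach on the Amanda *server's*
# firewall, assuming Linux 2.4 iptables.  $CLIENT is a placeholder.
CLIENT=192.168.1.10

# Step 1: let the outgoing start-backup request reach the client (udp/10080).
iptables -A OUTPUT -p udp -d $CLIENT --dport 10080 -j ACCEPT

# Step 2: instead of opening fixed incoming ports, accept anything
# conntrack recognizes as a reply to traffic we initiated.  The client's
# UDP reply to the server's random source port counts as ESTABLISHED.
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
```

UDP has no real connections, but conntrack tracks the address/port pairs for a timeout window, which is what lets the random-port reply in step 2 through without a static hole. The client's firewall needs the mirror-image treatment for the incoming udp/10080 request and the TCP data connections of step 3.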
Re: Unloading tapes when task done ?
It's sorta like when you go to your car mechanic. The job is not complete till you've put all of your tools away and cleaned your work area for the next job. That would be the mechanic's responsibility, and not the cleaning staff that follows in the evening. But at this moment, some 18 hours after Amanda started, it is still going, and only on the third (of an estimated 6-tape backup). So the unload/eject is the least of my current concerns with regard to getting a backup to happen. Right now the most efficient system I have would be to run tar cMf remoteHost:/dev/st0 / directly on the remote system; after each EOT I will run a shell script to change to the next tape. AFTER I DO THIS CORRECTLY, only then will I get a TRUE feel of the time needed to do this phase of the task. /gat John R. Jackson wrote: Maybe they can OPT-OUT of the feature ... It's just as easy for someone to opt in and do their own tape operations when Amanda is done. Amanda will currently support both camps -- unload it when done vs. leave it alone. Why add more complexity? We already have too many options (and lots of other things on the TODO list). btw, what do you mean "each operation"? ... I meant amdump and amflush. John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
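For reference, the manual multi-volume scheme described above can be sketched with GNU tar alone. This stand-in uses regular files as "tapes" and made-up paths so it can be tried without a drive; naming every volume up front with repeated -f options avoids the interactive change-tape prompt:

```shell
#!/bin/sh
# Multi-volume GNU tar sketch (stand-in for "tar cMf remoteHost:/dev/st0 /").
# Regular files play the role of tapes; -L is the per-volume size in KiB.
set -e
mkdir -p src restore
dd if=/dev/zero of=src/big bs=1024 count=3000 2>/dev/null   # ~3 MB payload

# Create: when vol1.tar fills (2 MB), tar switches to vol2.tar on its
# own, because both archives were named up front.
tar -cM -L 2048 -f vol1.tar -f vol2.tar src/big

# Extract the same way, reading the volumes in order.
tar -xM -f vol1.tar -f vol2.tar -C restore
cmp src/big restore/src/big    # exits 0 if the round trip was lossless
```

With a real drive you would replace the -f arguments with the tape device and use a -F changer script instead; the file-based version is just the testable skeleton of the same idea.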
Re: sendbackup with dumplevel 0 takes a while to startup :-{
Problem with this is obtaining a definitive PROOF. You do the Amanda process, and you notice that the tape is not spinning, even though you are in the sendbackup phase. You notice that the ethernet switch is not blinking. You can also notice the task(s) that are running. You also notice that the disk controller light is lit. You can make some general conclusions, like "gee, the reason I upped the estimate timeout value from 30 min to 6 hours is due to what is occurring now". I don't know why, or all of the facts, but I do notice that it's idle. And I do notice transfer timeouts, although I don't know why exactly either - yet. /gat John R. Jackson wrote: According to the tar docs, --listed-incremental will check on the files listed in the file if the incremental file is not empty. I suppose the listed-incremental file got filled during the sendsize procedure?! No. It got filled by the previous sendbackup run (e.g. yesterday). And, FYI, for a level 0 (full) dump, the listed-incremental file comes from /dev/null, i.e. it is used, but empty. During the sendbackup step, tar is apparently going through and checking the list, as no data is being transmitted while tar is doing this check. I'm not sure exactly how tar does this. You'd have to look at their code or ask them. I'd be surprised if it completely lost it for 30 minutes, though. Is this how it's supposed to work for a level 0 dump? (4 hours (wall-time) sendsize, ~4 hours (wall-time) incremental check, and finally the transfer of the data (I suppose longer than 4 hours)) ... Probably true. Is this how it will work with other level dumps (with the exception of the transfer-of-data step)? Yes. Except Amanda may also request a third estimate one level higher than the previous one if it might be time to bump (you can control this with the bump* parameters). You might want to consider using an alternate size calculation tool called calcsize.
It's not well supported, and does not handle exclusion lists, but does all the estimates at the same time so you might gain quite a bit of time that way. If you want to go this route, let me know and I'll find the postings from a while back about turning it back on. /gat John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
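A small demonstration of the --listed-incremental behavior John describes, using GNU tar with throwaway paths (the snapshot file here stands in for Amanda's gnutar-lists entry; for a level 0, Amanda effectively feeds tar an empty snapshot, so everything is dumped):

```shell
#!/bin/sh
# What --listed-incremental does, in miniature.
set -e
mkdir -p data
echo one > data/a
sleep 1    # make sure data/a's timestamps predate the snapshot

# "Level 0": missing/empty snapshot file => everything is dumped,
# and tar records file state in "snap".
tar -cf full.tar --listed-incremental=snap data

# Add one file; a second run against a copy of that snapshot dumps
# only what changed since the previous run.
echo two > data/b
cp snap snap.1
tar -cf incr.tar --listed-incremental=snap.1 data

tar -tf incr.tar   # lists data/ and data/b; data/a is not re-dumped
```

The "check the list" phase John mentions is tar walking the tree and comparing timestamps against the snapshot, which is pure local disk work and produces no output until it finds something to dump.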
missing estimate ?
Hello, I'm French, so excuse me for my English. I have a problem when I run amdump. In the logfile, I can read this error:

error result for host frigo disk /home/frigo/amanda: missing estimate

I don't understand why it doesn't work. Thank you for your help!

ps: a part of my logfile with the error:

GETTING ESTIMATES...
taper: pid 27592 executable taper version 2.4.2p2
driver: pid 27591 executable /usr/local/libexec/driver version 2.4.2p2
driver: send-cmd time 0.013 to taper: START-TAPER 20020404
driver: started dumper0 pid 27594
driver: started dumper1 pid 27595
dumper: dgram_bind: socket bound to 0.0.0.0.633
dumper: pid 27594 executable dumper version 2.4.2p2, using port 633
dumper: dgram_bind: socket bound to 0.0.0.0.634
dumper: pid 27595 executable dumper version 2.4.2p2, using port 634
dumper: dgram_bind: socket bound to 0.0.0.0.635
dumper: pid 27596 executable dumper version 2.4.2p2, using port 635
driver: started dumper2 pid 27596
driver: started dumper3 pid 27597
dumper: dgram_bind: socket bound to 0.0.0.0.636
dumper: pid 27597 executable dumper version 2.4.2p2, using port 636
error result for host frigo disk /home/frigo/amanda: missing estimate
getting estimates took 0.137 secs
FAILED QUEUE: 0: frigo /home/frigo/amanda
DONE QUEUE: empty
ANALYZING ESTIMATES...
planner: FAILED frigo /home/frigo/amanda 0 [missing result for /home/frigo/amanda in frigo response]
INITIAL SCHEDULE (size 64):
DELAYING DUMPS IF NEEDED, total_size 64, tape length 1662976 mark 0
delay: Total size now 64.
PROMOTING DUMPS IF NEEDED, total_lev0 0, balanced_size 0...
analysis took 0.000 secs
problems with GNUTAR exclude-lists
I'm trying to configure an Amanda server/client to use GNUTAR with exclude lists. GNUTAR works great, but the excludes don't! The basic configuration is like this:

define dumptype global {
    comment Global definitions
    index yes
    compress client best
    #maxdump 2
    holdingdisk true
    record yes
}

define dumptype bugs {
    global
    comment Backup incremental, remote, comp best
    holdingdisk yes
    program GNUTAR
    priority medium
    exclude list /usr/local/etc/amanda/exclude.gtar
    compress client best
    dumpcycle 5
}

And the disklist contains this:

bugs /space bugs -1 hme0

In bugs I have the file /usr/local/etc/amanda/exclude.gtar:

martin@bugs:~ cat /usr/local/etc/amanda/exclude.gtar
./pruebas/
./amanda/
martin@bugs:~

But it's not excluding those 2 directories, and the problem is that the dumps are growing too big. What can be wrong? -- Why use just any relational database, when you can use PostgreSQL? - Martín Marqués |[EMAIL PROTECTED] Programmer, Administrator, DBA | Centro de Telematica Universidad Nacional del Litoral -
Re: Unloading tapes when task done ?
DLTs are not flying-head technology. They are like 9-track, QIC, and, I think, Travan and Colorado. The heads do not spin (I had to think about it); the heads move up and down to change tracks. But the DLT 8000, now, also places the heads at an angle to the tape direction, as well as going up and down. I don't even think there is an idler pulley, just a tach. I'd like to know if there is 'continual' tension on the tape while it is loaded (on a DLT) (like that of a DECtape, or 9-track), but I do not know. But for whatever technology reasons the schemes of tape mechanisms have evolved, they all rely on knowing what 'state' they are in. When you have a power outage, turn off the drive, lightning, whatever, you may find that the tape left inside the mechanism is of little use to you. Quantum says don't do that, AND I'd bet that the legal staff of the other drive manufacturers will never certify that you will always recover a tape left inside the mechanism. Gene Heskett wrote: I'd love to see the tapes stored and used at or slightly below 50 degrees F, and 50% relative humidity, as the tape is many times less abrasive then. Some TV stations have even gone so far as to store their tapes in a small room adjacent to the control room which is maintained in the 40 degree and 40% range. Everything lasts longer, a lot longer. But I suppose that if you needed a tape right away, you'd have to wait for the temperature to rise, otherwise you'd get condensation on the colder tapes. The tape makers themselves recommend it too, and have data to
Re: index files
NFS-mounting the filesystem isn't a problem; this is what I was going to do if what I was trying with the index server didn't work. The weird thing about the hang is that the only thing that changed was specifying an index server other than the tape server. The dumps were written to tape, and the indexes were written but were still viewed as a temp file; they hadn't been moved to the file index. After seeing this I went back and reconfigured, recompiled, installed, and re-ran the dump, and everything ran great. Weird behavior, but everything is running great now. Thanks. John R. Jackson wrote: ... is it possible to write the indexes to another system like this? ... Not currently. It would take a pretty major change to the protocol. It also has a number of error handling issues. Any chance your other machine could NFS-export where you want to store the index files to the tape server machine, so it could write them that way? I used to do a variation of that. None of this has anything to do with your hang. Specifying the index server at ./configure time only sets a default for amrecover (which can be changed on the command line). It has no other purpose. You probably need to look at the client to find out why the data is not moving. Darin Perusich John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED] -- Darin Perusich Unix Systems Administrator Cognigen Corp. [EMAIL PROTECTED]
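If you go the NFS route John suggests, the setup is just an export plus a mount. The hostnames and paths below are made up for illustration; the only requirement is that the mount point be wherever your Amanda build keeps its index files:

```
# On the index host (hypothetical name "indexhost"), /etc/exports:
/var/amanda/index   tapeserver(rw,no_root_squash)

# On the tape server, /etc/fstab, mounting it where Amanda writes indexes:
indexhost:/var/amanda/index  /usr/local/var/amanda/index  nfs  rw,hard  0 0
```

The tape server then writes the index files locally, as far as Amanda is concerned, and no protocol change is needed.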
Re: sendbackup with dumplevel 0 takes a while to startup :-{
Problem with this is obtaining a definitive PROOF. Welcome to my world. You do the Amanda process, and you notice that the tape is not spinning, even though you are in the sendbackup phase. ... It might not be spinning if you have enough holding disk that the image is going there first. However I gather that's not the case with your system, so this means taper is not getting any data from the client. Moving further back in the chain ... You notice that the ethernet switch is not blinking. ... Which matches the above and also says the client is not even trying. So we continue to move on back ... You can also notice the task(s) that are running. ... So it's not that they just died, so now we move forward a bit from the far end ... You also notice that the disk controller light is lit. Are you saying the disk is busy? If so, then it's likely tar (or the OS) is grinding away but not generating any data into the pipeline to Amanda (sendbackup), as you've already pretty well guessed. So this is purely a tar (or OS or hardware) issue. If you did exactly the same thing by hand that Amanda is asking tar to do, it would act the same way. What's going on here is totally independent of Amanda. Next would be to run truss or a debugger on tar and find out what it's up to, then talk to those folks. You can make some general conclusions, like "gee, the reason I upped the estimate timeout value from 30 min to 6 hours is due to what is occurring now". ... You upped that value as a workaround for something very, very odd/bad with the way tar is working on your system. /gat John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
Re: Amrestore and tar
So then, without having to extract the entire archive without amrestore, how can I get the / partition from the tape as /boot is found first, matches the diskname (at least in terms of regular expressions), extracts and stops? Ooooh, you're so very close. The disk name is also a regular expression. When you just enter / it matches anything with a / in it. You want to do this: amrestore -p /dev/nst0 host '^/$' Warren John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
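Since both the host and disk arguments to amrestore are regular expressions, the anchoring is the whole trick; grep -E shows the same matching logic on a throwaway list of disk names:

```shell
#!/bin/sh
# amrestore matches disk names as regular expressions, so an unanchored
# "/" matches every disk name containing a slash.  grep -E demonstrates:
printf '/\n/boot\n/home\n' | grep -Ec '/'      # 3: all names contain "/"
printf '/\n/boot\n/home\n' | grep -Ec '^/$'    # 1: only the root disk
```

The same idea protects you on the host side too, e.g. '^host$' when one hostname is a prefix of another.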
help me
How can I resolve this?

Load tape Configuraciones-001 now
Continue? [Y/n]: y
EOF, check amidxtaped.debug file.
amrecover: short block 0 bytes
UNKNOWN file
amrecover: Can't read file header
extract_list - child returned non-zero status: 1

Please, I don't know what to do. -- Regards, Zulisser Zurita, Operating Systems Manager. Running DEBIAN testing GNU/Linux 2.4.12
Re: Amanda with gtar
On 4 Apr 2002 at 2:19pm, [EMAIL PROTECTED] wrote: Hi, again. Hey there. How are you? I need a vacation. You? I have a trouble, Don't we all... How can I back up a dir with all subdirs, if I am using gtar? At the moment, Amanda only backs up files in the root dir, and the rest of the subdirs are not backed up. Amanda runs gtar with the --one-file-system flag. So, you'll need to add a disklist entry for each filesystem you have (listed in the output of 'df -k', or 'cat /etc/mtab', or... -- Joshua "haven't had my coffee yet" Baker-LePain Department of Biomedical Engineering Duke University
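In other words, because of --one-file-system, a client with several mounted filesystems needs one disklist line apiece. The dumptype name and mount points below are examples, not from this thread:

```
# disklist on the Amanda server: one entry per client filesystem shown
# by 'df -k' on the client, since gtar won't cross mount points.
bugs  /       comp-user-tar
bugs  /usr    comp-user-tar
bugs  /space  comp-user-tar
```

Anything mounted under a listed directory (say /space/archive on its own partition) needs its own entry, or it silently won't be backed up.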
Re: Amanda user
On Thu, 4 Apr 2002 at 12:08pm, David Flood wrote If I want to change my amanda user (I have a good reason), am I right in thinking all I have to do is change the user in the amanda.conf of every config and change the user in inetd.conf and restart it on the server and every client? Or do I have to recompile amanda with the new user specified? I'm pretty sure you'll need to recompile amanda, as the username is built into the binaries. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
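If you do recompile, the user and group are set at ./configure time; --with-user and --with-group are the standard Amanda configure switches. The names and prefix below are examples:

```
# Rebuild Amanda with the new backup user baked into the binaries;
# repeat on the server and every client.
./configure --with-user=newamanda --with-group=newdisk --prefix=/usr/local
make
make install
```

Remember to also chown the existing Amanda state directories (indexes, gnutar-lists, debug files) to the new user, or the first run will fail on permissions.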
Re: problems with GNUTAR exclude-lists
On Thu, 4 Apr 2002 at 7:57am, Martín Marqués wrote: martin@bugs:~ cat /usr/local/etc/amanda/exclude.gtar ./pruebas/ ./amanda/ martin@bugs:~ But it's not excluding those 2 directories, and the problem is that the dumps are growing too big. Just a guess (but this is what I use) -- lose the trailing slashes: ./pruebas ./amanda It works for me(TM). -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
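The trailing-slash behavior can be reproduced with GNU tar alone, no Amanda involved: exclude patterns are matched against member names like "./pruebas", which carry no trailing slash, so "./pruebas/" can silently match nothing. A throwaway-directory sketch:

```shell
#!/bin/sh
# Demonstrate an exclude list the way Amanda hands it to gtar
# (--exclude-from), using slash-free patterns.
set -e
mkdir -p space/pruebas space/keep
echo x > space/pruebas/f
echo y > space/keep/g

printf './pruebas\n' > exclude.gtar    # note: no trailing slash

cd space
tar -cf ../backup.tar --exclude-from=../exclude.gtar .
cd ..
tar -tf backup.tar    # ./keep/ is listed; ./pruebas/ should be gone
```

Running the same thing with './pruebas/' in the exclude file is a quick way to confirm whether your tar version ignores slashed patterns.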
RE: Amanda with gtar
On 4 Apr 2002 at 4:52pm, [EMAIL PROTECTED] wrote: Yes, I know it, but I want to back up a dir and all the subdirs it has, not a filesystem. But it does not back up all subdirs, only a few. What does the disklist entry for that client look like, and what is the output of 'df -k' on that client? CCed back to the list so all can see and help. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
amrestore problem
Tried looking in the FAQ first but didn't come up with anything. Anyway, I have Amanda installed on a Linux Red Hat 7.2 box, using version 2.4.3b2-20020308. Backups seem to perform normally, but I decided to verify by doing a restore and got the following error:

sh-2.05$ /usr/local/sbin/amrestore -p /dev/nrst0 sunshine hda2 | /sbin/restore -ivf -
amrestore: could not stat /dev/nrst0
Verify tape and initialize maps
Input is from file/pipe
/sbin/restore: Tape read error on first record

Any ideas here? Brian Davidson 11710 Plaza America Drive Reston, Virginia 20190 703.261.4694 703.261.5086 Fax
Re: Unloading tapes when task done ?
On Thursday 04 April 2002 07:33 am, Uncle George wrote: DLTs are not flying-head technology. They are like 9-track, QIC, and, I think, Travan and Colorado. The heads do not spin (I had to think about it); the heads move up and down to change tracks. But the DLT 8000, now, also places the heads at an angle to the tape direction, as well as going up and down. I don't even think there is an idler pulley, just a tach. I'd like to know if there is 'continual' tension on the tape while it is loaded (on a DLT) (like that of a DECtape, or 9-track), but I do not know. I see, thanks. Interesting to see that azimuth changes are also being used to pack tracks tighter in the linear drives too these days. But for whatever technology reasons the schemes of tape mechanisms have evolved, they all rely on knowing what 'state' they are in. When you have a power outage, turn off the drive, lightning, whatever, you may find that the tape left inside the mechanism is of little use to you. Quantum says don't do that, AND I'd bet that the legal staff of the other drive manufacturers will never certify that you will always recover a tape left inside the mechanism. Since *you* are the tape changer in this case, I can see why the requested operations would be to your advantage. Gene Heskett wrote: I'd love to see the tapes stored and used at or slightly below 50 degrees F, and 50% relative humidity, as the tape is many times less abrasive then. Some TV stations have even gone so far as to store their tapes in a small room adjacent to the control room which is maintained in the 40 degree and 40% range. Everything lasts longer, a lot longer. But I suppose that if you needed a tape right away, you'd have to wait for the temperature to rise, otherwise you'd get condensation on the colder tapes. As the control room in this case was handled by yet another duct from the same AC, and therefore pretty dry too, it wasn't a problem. 
Commercials came out, got loaded, played, and put away with not more than 5 minutes between the time they were brought out and loaded into the players. With spinning-head tech, any dew will tell you right quick, as you'll load up a tape and have 20 feet of it wrapped around the drum because the dew film will make it grab the spinning head. It's messy, and as our newsroom folks recently found, expensive. The locked-up head drum proceeded to burn up the servo boards so badly we had to replace them, at $700 a copy, 3 copies, knocking out 3 of the 5 cameras they had. 14-pin TSOP chips got so hot they burned halfway through the epoxy board under them. We have at this point discussed this well enough that the rest of the readers can make intelligent decisions based on the technology of their individual drives. -- Cheers, Gene AMD K6-III@500mhz 320M Athlon1600XP@1400mhz 512M 98.7+% setiathome rank, not too shabby for a hillbilly
RE: Still having problems with ufsdump returning 3...
John R. Jackson [mailto:[EMAIL PROTECTED]] wrote: You're looking at the wrong end of the pipe. The messages imply the server side shut things down, which broke the pipe and filtered back to the clients as bad news (there was nowhere to shove their data). So what else is in the Amanda mail report? Hi! I've included the report below -- as far as I can tell, there isn't anything that would cause the problem. Are there any particular log files that I should look at to see what happened? Thanks, Ricky

---
These dumps were to tape standard12. The next tape Amanda expects to use is: standard13.

FAILURE AND STRANGE DUMP SUMMARY:
gcrc.mgh.h /dev/dsk/c0t2d0s4 lev 2 FAILED [/usr/sbin/ufsdump returned 3]
newton.mgh /dev/dsk/c0t2d0s0 lev 0 FAILED [/usr/sbin/ufsdump returned 3]

STATISTICS:
                            Total      Full     Daily
Estimate Time (hrs:min)      0:04
Run Time (hrs:min)           0:51
Dump Time (hrs:min)          1:37      0:01      1:36
Output Size (meg)           352.5      50.8     301.7
Original Size (meg)         994.7      50.8     943.9
Avg Compressed Size (%)      31.2        --      31.2
Filesystems Dumped             40         2        38   (1:33 2:4 3:1)
Avg Dump Rate (k/s)          62.0     948.3      53.6
Tape Time (hrs:min)          0:06      0:01      0:06
Tape Size (meg)             353.7      50.9     302.8
Tape Used (%)                 3.1       0.4       2.6
Filesystems Taped              40         2        38   (1:33 2:4 3:1)
Avg Tp Write Rate (k/s)     941.4    1087.6     920.6

FAILED AND STRANGE DUMP DETAILS:

/-- gcrc.mgh.h /dev/dsk/c0t2d0s4 lev 2 FAILED [/usr/sbin/ufsdump returned 3]
sendbackup: start [gcrc.mgh.harvard.edu:/dev/dsk/c0t2d0s4 level 2]
sendbackup: info BACKUP=/usr/sbin/ufsdump
sendbackup: info RECOVER_CMD=/usr/sbin/ufsrestore -f... -
sendbackup: info end
| DUMP: Writing 32 Kilobyte records
| DUMP: Date of this level 2 dump: Tue Apr 02 19:14:48 2002
| DUMP: Date of last level 1 dump: Tue Mar 26 19:09:43 2002
| DUMP: Dumping /dev/rdsk/c0t2d0s4 (gcrc.mgh.harvard.edu:/crc) to standard output.
| DUMP: Mapping (Pass I) [regular files]
| DUMP: Mapping (Pass II) [directories]
| DUMP: Mapping (Pass II) [directories]
| DUMP: Mapping (Pass II) [directories]
| DUMP: Estimated 1199200 blocks (585.55MB) on 0.01 tapes.
| DUMP: Dumping (Pass III) [directories]
| DUMP: Dumping (Pass IV) [regular files]
| DUMP: 22.16% done, finished in 0:35
| DUMP: 47.81% done, finished in 0:22
| DUMP: 81.22% done, finished in 0:06
? sendbackup: index tee cannot write [Broken pipe]
| DUMP: Broken pipe
| DUMP: The ENTIRE dump is aborted.
? index returned 1
sendbackup: error [/usr/sbin/ufsdump returned 3]
\

/-- newton.mgh /dev/dsk/c0t2d0s0 lev 0 FAILED [/usr/sbin/ufsdump returned 3]
sendbackup: start [newton.mgh.harvard.edu:/dev/dsk/c0t2d0s0 level 0]
sendbackup: info BACKUP=/usr/sbin/ufsdump
sendbackup: info RECOVER_CMD=/usr/sbin/ufsrestore -f... -
sendbackup: info end
| DUMP: Writing 32 Kilobyte records
| DUMP: Date of this level 0 dump: Tue Apr 02 19:04:29 2002
| DUMP: Date of last level 0 dump: the epoch
| DUMP: Dumping /dev/rdsk/c0t2d0s0 (newton.mgh.harvard.edu:/homes/msru1) to standard output.
| DUMP: Mapping (Pass I) [regular files]
| DUMP: Mapping (Pass II) [directories]
| DUMP: Estimated 2466188 blocks (1204.19MB) on 0.02 tapes.
| DUMP: Dumping (Pass III) [directories]
| DUMP: Dumping (Pass IV) [regular files]
| DUMP: 14.09% done, finished in 1:01
| DUMP: 22.09% done, finished in 1:11
| DUMP: 29.37% done, finished in 1:12
| DUMP: 37.85% done, finished in 1:06
? sendbackup: index tee cannot write [Broken pipe]
| DUMP: Broken pipe
| DUMP: The ENTIRE dump is aborted.
? index returned 1
sendbackup: error [/usr/sbin/ufsdump returned 3]
\

NOTES:
planner: Incremental of hedwig.mgh.harvard.edu:/home/msa bumped to level 2.
planner: Incremental of hedwig.mgh.harvard.edu:/var bumped to level 3.
planner: Full dump of newton.mgh.harvard.edu:/dev/dsk/c0t2d0s0 promoted from 13 days ahead.
planner: Full dump of newton.mgh.harvard.edu:/dev/dsk/c0t0d0s0 promoted from 13 days ahead.
planner: Full dump of einstein.mgh.harvard.edu:ad0s1a promoted from 13 days ahead.
taper: tape standard12 kb 362208 fm 40 [OK]

DUMP SUMMARY:
                          DUMPER STATS                     TAPER STATS
HOSTNAME     DISK    L ORIG-KB  OUT-KB COMP% MMM:SS   KB/s MMM:SS   KB/s
einstein.mgh ad0s1a  0   32160   32160    --   0:16 2003.7   0:30 1088.7
einstein.mgh ad0s1e  1    1024     352  34.4   0:06   63.4   0:00 1822.8
einstein.mgh ad0s1f  1     334      32   9.6   0:04    8.9   0:00 1632.7
einstein.mgh ad0s1g  1      38      32  84.2
Re: amrestore problem
On Thu, 4 Apr 2002 at 9:47am, Davidson, Brian wrote Backup seem to perform normally, but decided to verify by doing a restore and got the following error: Verifying backups?! Are you insane? You're not supposed to do that until you *need* them. Sheesh. sh-2.05$ /usr/local/sbin/amrestore -p /dev/nrst0 sunshine hda2 | /sbin/restore -ivf - amrestore: could not stat /dev/nrst0 You said RedHat, but that ain't a Linux device. You want /dev/nst0. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: amrestore problem
Hi, use /dev/nst0 as tape-device and you should be fine Christoph Davidson, Brian schrieb: Tried looking in the FAQ first but didn't come up with anything. Anyway I have amanda installed on a Linux Redhat 7.2 box, using version 2.4.3b2-20020308 Backup seem to perform normally, but decided to verify by doing a restore and got the following error: sh-2.05$ /usr/local/sbin/amrestore -p /dev/nrst0 sunshine hda2 | /sbin/restore -ivf - amrestore: could not stat /dev/nrst0 Verify tape and initialize maps Input is from file/pipe /sbin/restore: Tape read error on first record Any ideas here? Brian Davidson 11710 Plaza America Drive Reston, Virginia 20190 703.261.4694 703.261.5086 Fax
RV: DailySet1 AMANDA MAIL REPORT FOR April 3, 2002
I'm sending you my last report from Amanda. -----Original Message----- From: Usuario para AMANDA backup [mailto:[EMAIL PROTECTED]] Sent: Wednesday, 03 April 2002 23:50 To: [EMAIL PROTECTED] Subject: DailySet1 AMANDA MAIL REPORT FOR April 3, 2002

These dumps were to tape DGQ976. The next tape Amanda expects to use is: DGQ977.

STATISTICS:
                            Total      Full     Daily
Estimate Time (hrs:min)      0:00
Run Time (hrs:min)           0:05
Dump Time (hrs:min)          0:01      0:01      0:00
Output Size (meg)           214.6     200.8      13.8
Original Size (meg)         214.6     200.8      13.8
Avg Compressed Size (%)        --        --        --
Filesystems Dumped              6         2         4   (1:4)
Avg Dump Rate (k/s)        3398.0    4227.8     882.0
Tape Time (hrs:min)          0:01      0:01      0:00
Tape Size (meg)             214.8     200.8      13.9
Tape Used (%)                 0.6       0.5       0.1
Filesystems Taped               6         2         4   (1:4)
Avg Tp Write Rate (k/s)    4540.0    5771.4    1114.2

NOTES:
planner: Incremental of sc01us0103:/global/datos2/esri-web bumped to level 2.
planner: Full dump of sc01us0103:/var promoted from 5 days ahead.
planner: Full dump of sc01us0103:/global/datos2/esri-web promoted from 5 days ahead.
taper: tape DGQ976 kb 219936 fm 6 [OK]

DUMP SUMMARY:
                                       DUMPER STATS              TAPER STATS
HOSTNAME   DISK                    L ORIG-KB  OUT-KB COMP% MMM:SS   KB/s MMM:SS   KB/s
sc01us0103 /                       1    4288    4288    --   0:05  783.4   0:03 1557.7
sc01us0103 /global/datos2/esri-web 0  128864  128864    --   0:37 3503.1   0:19 6662.3
sc01us0103 /var                    0   76736   76736    --   0:12 6477.4   0:16 4712.9
sc01us0104 /                       1    4128    4128    --   0:06  716.4   0:03 1511.7
sc01us0104 /var                    1    2400    2400    --   0:03  826.3   0:02  977.2
sc01us0105 /                       1    3328    3328    --   0:02 1752.9   0:05  700.4

(brought to you by Amanda version 2.4.2p2)
RE: Amanda with gtar
On 4 Apr 2002 at 5:02pm, [EMAIL PROTECTED] wrote: But I want to back up all of /global/datos2/esri-web/, so I put this in my disklist. ls -l of /global/datos2/esri-web/ is drwxr-xr-x 2 arcims esri 512 Dec 27 12:11 axl drwxr-xr-x 2 arcims esri 512 Oct 31 13:10 conf drwxrwxrwx 5 arcims esri 512 Mar 14 10:18 datos drwxr-xr-x 2 arcims esri 59392 Apr 4 13:19 output drwxrwxr-x 36 arcims esri 1024 Apr 2 07:59 website -rw-r--r-- 1 arcims esri 5685491 Feb 7 11:19 xalan-j_2_2-bin.tar.gz drwxr-xr-x 5 arcims esri 512 Feb 7 11:19 xalan-j_2_2_0 Well, when I run amrecover from this machine, set disk to /global/datos2/esri-web/, and run the ls command, it shows this: xalan-j_2_2_0/ Hmm. What does your dumptype look like? What does 'gtar --version' on the client say? What about a sample sendbackup*debug (for that disklist entry) from the client? -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: RV: DailySet1 AMANDA MAIL REPORT FOR April 3, 2002
On 4 Apr 2002 at 5:36pm, [EMAIL PROTECTED] wrote sc01us0103 /global/datos2/esri-web 0 128864 128864 --0:373503.1 0:196662.3 How does this compare to the amount of data actually in that directory? 'du -sk /global/datos2/esri-web' -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
how to handle holidays?
Hello list. I was just wondering... I make a full backup every day, 5 times a week for 5 workdays. What should I do when there's a holiday and I can't switch the tape? My entire tape sequence would be fscked up... Any solutions here? thnx! Kind regards -- Tom -- Tom Van de Wiele System Administrator Eduline Colonel Bourgstraat 105a 1140 Brussel http://www.eduline.be
Re: amrestore problem
Joshua Baker-LePain [EMAIL PROTECTED] writes: Verifying backups?! Are you insane? You're not supposed to do that until you *need* them. Sheesh. Is that so? What's amverify doing in my crontab file, then? ;-) (That was a rhetorical question. It's there so that ejecting the tape will be quicker, which means I get out of the cold, noisy machine room faster. It hasn't found any errors yet, other than those end-of-tape errors you get when the tape's too short to hold everything.) Come to think of it, if amverify were to find a real error, what's the appropriate procedure for me to follow? -- Arvid
RE: RV: DailySet1 AMANDA MAIL REPORT FOR April 3, 2002
On 4 Apr 2002 at 6:16pm, [EMAIL PROTECTED] wrote: A question: on the server I am using tar 1.13.19, but on the client tar 1.13; maybe this is it? Yes. tar 1.13 generates bad index files. The data *should* be fine, but your indexes are bad. Upgrade the client immediately, force a level 0, and all should be better. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: amrestore problem
On Thu, 4 Apr 2002 at 6:07pm, Arvid Grøtting wrote: Come to think of it, if amverify were to find a real error, what's the appropriate procedure for me to follow? It depends on the error. If the tape is bad, amrmtape it. Amanda will invalidate the backups on that tape, and catch up on the next night. If only a particular filesystem or client checks as bad, try to find out why. After you've fixed 'em, force level 0s. But those are just my off-the-top-of-my-head suggestions. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
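As a rough sketch of that procedure with Amanda's own tools; the config name, tape label, and host/disk names below are examples, not from this thread:

```
# After amverify flags a bad tape: tell Amanda to forget the tape,
# then force fresh fulls for the affected areas.
amrmtape daily DAILY05             # invalidate everything recorded on it
amadmin daily force client /home   # schedule a level 0 on the next run
amadmin daily find client /home    # see which good copies remain on tape
```

amrmtape only edits Amanda's database and tapelist; the cartridge itself can then be relabeled (if it tests good) or binned.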
Re: amrestore problem
Ok, you mean you get the same error with /dev/nst0, or do you say with /dev/nst0 it restores OK? Your message is a bit short to know what you want to say... ;-) Christoph PS: please cc the list also, so the other guys out there see what's going on... Davidson, Brian schrieb: It's /dev/nst0. I had just checked my BSDI box, which uses nrst0. Thanks! -Original Message- From: Christoph Scheeder [mailto:[EMAIL PROTECTED]] Sent: Thursday, April 04, 2002 10:34 AM To: Davidson, Brian Cc: [EMAIL PROTECTED] Subject: Re: amrestore problem Hi, use /dev/nst0 as tape-device and you should be fine Christoph Davidson, Brian schrieb: Tried looking in the FAQ first but didn't come up with anything. Anyway I have amanda installed on a Linux Redhat 7.2 box, using version 2.4.3b2-20020308 Backup seem to perform normally, but decided to verify by doing a restore and got the following error: sh-2.05$ /usr/local/sbin/amrestore -p /dev/nrst0 sunshine hda2 | /sbin/restore -ivf - amrestore: could not stat /dev/nrst0 Verify tape and initialize maps Input is from file/pipe /sbin/restore: Tape read error on first record Any ideas here? Brian Davidson 11710 Plaza America Drive Reston, Virginia 20190 703.261.4694 703.261.5086 Fax
Re: Still having problems with ufsdump returning 3...
... I've included the report below -- as far as I can tell, there isn't anything that would cause the problem. ... Agreed. Are there any particular log files that I should look at to see what happened? First, look for any core files in /tmp/amanda. Then read through the amdump.NN file that corresponds to this run. It's in the logdir directory (amgetconf config logdir). Ricky John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
Re: how to handle holidays?
On Thu, 4 Apr 2002 at 6:18pm, Tom Van de Wiele wrote I was just wondering... I make a full backup every day, 5 times a week for 5 workdays. What should I do when there's a holiday and I can't switch the tape? My entire tape sequenced would be fscked up... I think most people just let amanda run onto holding disk, amflush when they get back, and go merrily along. Yes, this may mean that your monday tape is no longer used on monday, but that's why I use numbers and let amanda tell me which tape it wants. Also, one tape error will throw off your tape-day correlation anyway. Just my $0.02. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
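In commands, the holiday routine is roughly a normal run followed by a flush; "daily" is an example config name:

```
# Holiday night: cron fires amdump as usual.  With no fresh tape mounted,
# the dumps stay on the holding disk.
amdump daily

# First day back: write the held dumps to the tape Amanda asks for.
amflush daily
```

This assumes the holding disk is big enough for the missed night(s); amflush will prompt for which holding areas to write and which tape it expects.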
Re: A few interesting problems (tapeless, interactive restores, etc)
Hi John, I am looking at amanda-2.4.3b3, but aside from docs/VTAPE-API I don't see any mention of how the virtual tapes can be set up with AMANDA. And the VTAPE-API looks to me, as a non-coder, like possibly more than I need to know (though if I need to hack some C code I can). This sounds like it would be much easier for me, as having a virtual tape name to ask of ADSM is much easier than trying to figure out which tapes I need on my own, and restoring a dump disk for the purpose. Can you point me to any documentation, crude or otherwise, describing this process? If anyone else has gotten this to work, what does it involve? Do I need to change tapeio.c, use special amanda.conf directives, etc.? (Nothing is mentioned in the sample files.) I appreciate any assistance. Greg On Wed, 2002-04-03 at 22:39, John R. Jackson wrote: First off, for reasons beyond the scope of this email, my AMANDA server has no tape device. I am sending all backups to the holding disk. We also have a large IBM tape library running ADSM that I would like to incorporate into my AMANDA backup strategy. ... Sounds like a perfect setup for the tapeio code in 2.4.3 (now in beta test, but the tapeio stuff has been stable for a long time). You would set up a disk area to emulate a tape, then use the chg-multi tape changer to move the data back and forth with your ADSM system. The current (2.4.3 beta) chg-multi has a posteject hook to a script you provide that could move the data into ADSM. It would be easy to add a preload step to do the other direction (don't know why I didn't think of doing that when I did posteject). A) I need a way of asking AMANDA, If I want foo file from 3/20/2000 on machine bar, which dump 'directories' will I need? ... The tapeio code would take care of this. Amanda would tell you you need tapes A, B and C, which would map directly to your ADSM areas. If you don't go with tapeio, then I think amadmin config find ... is what you'll want. 
You may also need to be a bit sneaky about the data motion to ADSM and leave the names and holding disk directories behind, but truncate the files to zero length, or, worst case, truncate them to just their 32 KByte header. I'm pretty sure that would fool amrecover sufficiently into using them (obviously, you'd have to reload the rest of the data before turning the restore completely loose). Greg Mohney John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED] -- Greg Mohney Technical Specialist, UNIX Administration APAC Customer Services (319) 896-5027
Re: A few interesting problems (tapeless, interactive restores, etc)
On 4 Apr 2002 at 11:04am, Greg Mohney wrote Can you point me to any documentation, crude or otherwise, describing this process? If anyone else has gotten this to work, what does it involve? If I need to change tapeio.c, use special amanda.conf directives, etc etc etc (nothing is mentioned in sample files). I appreciate any assistance. Documentation on how to use tapeio is in the amanda(8) man page shipped with 2.4.3bN. Look for stuff about specifying file: as your tapedev. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
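For reference, a vtape setup under 2.4.3 boils down to a handful of amanda.conf lines. The paths and changer-file name below are made up for illustration, so check the amanda(8) man page and the comments at the top of chg-multi for the exact syntax on your beta:

```
# amanda.conf sketch -- hypothetical paths
tpchanger "chg-multi"                       # changer script cycling the slots
changerfile "/usr/local/etc/amanda/daily/chg-multi.conf"
tapedev "file:/var/amanda/vtapes/slot1"     # "file:" driver writes to disk
```

Each slot directory then stands in for one tape, and chg-multi's posteject hook is the natural place for a site script that copies a finished slot off to ADSM.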
Re: how to handle holidays?
On Thu, 4 Apr 2002, Joshua Baker-LePain wrote: On Thu, 4 Apr 2002 at 6:18pm, Tom Van de Wiele wrote I was just wondering... I make a full backup every day, 5 times a week for 5 workdays. What should I do when there's a holiday and I can't switch the tape? My entire tape sequence would be fscked up... I think most people just let amanda run onto the holding disk, amflush when they get back, and go merrily along. Or use a tape library. -Mitch
Mysql backup
Does anyone know of a truly slick way to back up MySQL databases? Right now I am trying to write a script which read-only locks the databases, then runs amanda, and then unlocks the databases. I am having fun doing it, but I see little reason to re-invent the wheel if someone else has already done this. I also could be going about this in an entirely wrong-headed manner. All I know is I will not have the disk space to hold an entire mysql hotcopy on disk, so I am trying to find a way to have amanda back up the database files directly. The ultimate solution would be to have amanda go through and lock the databases individually as it works, but that I fear is close to impossible. Oh well. Thanks in advance for any help -john
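For what it's worth, the lock / dump / unlock shape of such a wrapper can be sketched as below (Python purely for illustration; the three command lists are placeholders, not real amanda or mysql invocations). One caveat worth knowing: a MySQL FLUSH TABLES WITH READ LOCK is released as soon as the client session that took it disconnects, so in a real script the lock and unlock have to happen inside a single connection rather than two separate mysql calls.

```python
import subprocess

def run_with_lock(lock_cmd, backup_cmd, unlock_cmd):
    """Run backup_cmd between lock_cmd and unlock_cmd (sketch).

    The unlock step always runs, even when the backup fails, so the
    databases are never left read-locked.  All three arguments are
    placeholder argv lists, e.g. ["amdump", "daily"].
    """
    subprocess.run(lock_cmd, check=True)
    try:
        subprocess.run(backup_cmd, check=True)
    finally:
        subprocess.run(unlock_cmd, check=True)
```

The try/finally is the whole point: without it, a failed amdump would leave the databases locked until someone noticed.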
Re: Mysql backup
On Thu, 4 Apr 2002, John Rosendahl wrote: Does anyone know of a truly slick way to back up MySQL databases? Right now I am trying to write a script which read-only locks the databases, then runs amanda, and then unlocks the databases. I am having fun doing it, but I see little reason to re-invent the wheel if someone else has already done this. I also could be going about this in an entirely wrong-headed manner. All I know is I will not have the disk space to hold an entire mysql hotcopy on disk, so I am trying to find a way to have amanda back up the database files directly. The ultimate solution would be to have amanda go through and lock the databases individually as it works, but that I fear is close to impossible. Oh well. Thanks in advance for any help -john I think I found this somewhere in the mysql notes, etc: more /usr/local/sbin/dump_mysql #!/bin/sh # Quick little script to get stable copies of the database. /usr/local/bin/mysqlhotcopy -u root -p --allowold your_table_names directory_to_dump_to You could either do it like that and subsequently compress them, but I'm sure Amanda would take care of that anyway. YMMV -- ~ Doug Silver Network Manager Quantified Systems, Inc ~
Question???Application Watch
Hello Everyone, This is not amanda-related, but I really need to find a solution for this. I want to monitor every machine on the network. Whenever someone installs software I want to be e-mailed about it. Do you know any software that can do this? Thanks in advance... Kaan
Re: Mysql backup
We are using a perl script that connects to remote machines and creates a dump on the local machine. A shell script that uses this perl script is added to the crontab and run nightly. You can then use amanda to back up these dumps. I've attached these scripts. Just edit the shell script to your liking and create the folder where you want the dumps saved. Oh, and the shell script is set up to delete any dumps older than 60 days. Hope this helps. It's simple and reliable. -- Brian On Thu, 2002-04-04 at 11:36, John Rosendahl wrote: Does anyone know of a truly slick way to back up MySQL databases? Right now I am trying to write a script which read-only locks the databases, then runs amanda, and then unlocks the databases. I am having fun doing it, but I see little reason to re-invent the wheel if someone else has already done this. I also could be going about this in an entirely wrong-headed manner. All I know is I will not have the disk space to hold an entire mysql hotcopy on disk, so I am trying to find a way to have amanda back up the database files directly. The ultimate solution would be to have amanda go through and lock the databases individually as it works, but that I fear is close to impossible. Oh well. Thanks in advance for any help -john

#!/usr/bin/perl
use Getopt::Long;

my ( $userid, $password, $host, $database, $help, $outdir );

GetOptions("u=s"    => \$userid,
           "p=s"    => \$password,
           "host=s" => \$host,
           "db=s"   => \$database,
           "h"      => \$help,
           "dir=s"  => \$outdir,
           "<>"     => \&extraArgs);

# Make sure we have all the inputs we need
if ( $help || !$userid || !$password || !$database ) {
    printUsage();
}
if ( !$host ) {
    $host = "localhost";
}

# find out where the mysqldump command lives
chomp ( $dumpcmd = `which mysqldump` );
unless ( $dumpcmd ) {
    $dumpcmd = "/usr/local/mysql/bin/mysqldump";
    unless ( -e $dumpcmd ) {
        die "ERROR: Couldn't find mysqldump command\n";
    }
}

# build the output file name
$outfile = $database . "_at_" . $host . "_on_" . time . ".sql";

# Create a directory to put the output in unless one was specified
unless ( $outdir ) {
    $home = $ENV{HOME};
    $outdir = "$home/db";
}
unless ( -e $outdir && -d $outdir ) {
    umask 0;
    mkdir $outdir, 0777;
}
$output = $outdir . "/" . $outfile;

# Log what we are doing
print "$dumpcmd -c -u $userid -p$password -h $host $database > $output\n";

# Do the dump
system("$dumpcmd -c -u $userid -p$password -h $host $database > $output");

# Compress it
$gzip = `which gzip`;
chomp $gzip;
unless ( $gzip ) {
    $gzip = "/bin/gzip";
    unless ( -e $gzip ) {
        die "ERROR: Couldn't find gzip command\n";
    }
}
system("$gzip $output");

# printUsage
sub printUsage {
    print "Usage: $0 -u=username -p=password -db=database [-host=hostname] [-dir=output_directory]
  -h                         # help (this text)
  -u=username                # user login for accessing the database
  -p=password                # user password for accessing the database
  -host=hostname|ip address  # server the database is running on
  -db=database name          # name of the db to back up
  -dir=output directory      # where to store output files
";
    exit 0;
}

#
# Subroutine: extraArgs
#
# Purpose: print an error message and exit
# if unexpected command line arguments are found
#
sub extraArgs {
    my ($bad_arg) = @_;
    print "Invalid argument [$bad_arg] passed to $0\n";
    printUsage();
}

#!/bin/bash
echo "Removing files older than 60 days.."
/usr/bin/find /home/AccountToUse/db -mtime +60 -print -exec rm -f {} \;
/home/AccountToUse/backupMySQL.pl -u=databaseaccount -p=password -host=hostnameorip -db=DBNAME
bind_portrange: Permission denied
Two days ago, backups for one of the directories on my samba server started failing. Nothing was changed in the amanda configuration and this problem only affects the one directory. Total size of the directory is about 7.5 G. Unfortunately this is a shared drive exported to windows users via samba and suffers 'unintentional' deletions about twice a month. I've included the relevant parts of the e-mail and sendsize, sendbackup and runtar debug files. The only problem I can see : sendbackup: bind_portrange: port 909: Permission denied but this only happens on this one drive. Anyone know why? From email: /-- thames /export/common lev 0 FAILED [data timeout] sendbackup: start [thames:/export/common level 0] sendbackup: info BACKUP=/bin/gtar sendbackup: info RECOVER_CMD=/bin/gtar -f... - sendbackup: info end \ From sendsize.debug: sendsize: calculating for amname '/export/common', dirname '/export/common' sendsize: getting size via gnutar for /export/common level 0 sendsize: spawning /usr/local/libexec/runtar in pipeline sendsize: argument list: /bin/gtar --create --file /dev/null --directory /export/common --one-file-system --listed-incremental /usr/local/var/amanda/gnutar-lists/thames_export_common_0.new --sparse --ignore-failed-read --totals --exclude-from /export/common/.exclude . Total bytes written: 7442524160 (6.9GB, 253MB/s) . sendsize: getting size via gnutar for /export/common level 1 sendsize: spawning /usr/local/libexec/runtar in pipeline sendsize: argument list: /bin/gtar --create --file /dev/null --directory /export/common --one-file-system --listed-incremental /usr/local/var/amanda/gnutar-lists/thames_export_common_1.new --sparse --ignore-failed-read --totals --exclude-from /export/common/.exclude . 
Total bytes written: 130897920 (125MB, 18MB/s) From sendbackup.debug sendbackup: debug 1 pid 7739 ruid 250 euid 250 start time Thu Apr 4 06:04:02 2002 /usr/local/libexec/sendbackup: version 2.4.3b2 sendbackup: got input request: GNUTAR /export/common 0 1970:1:1:0:0:0 OPTIONS |;bsd-auth;index;exclude-list=.exclude; parsed request as: program `GNUTAR' disk `/export/common' lev 0 since 1970:1:1:0:0:0 opt `|;bsd-auth;index;exclude-list=.exclude;' sendbackup: try_socksize: send buffer size is 65536 sendbackup: bind_portrange: port 909: Permission denied sendbackup: stream_server: waiting for connection: 0.0.0.0.57536 sendbackup: bind_portrange: port 909: Permission denied sendbackup: stream_server: waiting for connection: 0.0.0.0.57537 sendbackup: bind_portrange: port 909: Permission denied sendbackup: stream_server: waiting for connection: 0.0.0.0.57538 waiting for connect on 57536, then 57537, then 57538 sendbackup: stream_accept: connection from 192.168.124.25.39935 sendbackup: stream_accept: connection from 192.168.124.25.39936 sendbackup: stream_accept: connection from 192.168.124.25.39937 got all connections sendbackup-gnutar: doing level 0 dump as listed-incremental to /usr/local/var/amanda/gnutar-lists/thames_export_common_0.new sendbackup-gnutar: doing level 0 dump from date: 1970-01-01 0:00:00 GMT sendbackup: spawning /usr/local/libexec/runtar in pipeline sendbackup: argument list: gtar --create --file - --directory /export/common --one-file-system --listed-incremental /usr/local/var/amanda/gnutar-lists/thames_export_common_0.new --sparse --ignore-failed-read --totals --exclude-from /export/common/.exclude . 
sendbackup: started index creator: /bin/gtar -tf - 2>/dev/null | sed -e 's/^\.//' sendbackup-gnutar: /usr/local/libexec/runtar: pid 7742 From runtar.debug: runtar: debug 1 pid 7742 ruid 250 euid 0 start time Thu Apr 4 06:04:02 2002 gtar: version 2.4.3b2 running: /bin/gtar: gtar --create --file - --directory /export/common --one-file-system --listed-incremental /usr/local/var/amanda/gnutar-lists/thames_export_common_0.new --sparse --ignore-failed-read --totals --exclude-from /export/common/.exclude . -- -- Stephen Carville UNIX and Network Administrator DPSI (formerly Ace USA Flood Services) 310-342-3602 [EMAIL PROTECTED]
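The Permission denied itself is just the kernel refusing to let a non-root process (note ruid 250 euid 250 in the sendbackup header above) bind a reserved port below 1024; sendbackup then falls back to an unprivileged port (57536 and friends), which is why the dump can still proceed when those ports get through. The behaviour can be sketched like this (Python purely for illustration, not Amanda's actual bind_portrange C code):

```python
import socket

def bind_portrange(low, high):
    """Try to bind each TCP port in [low, high]; fall back to any free port.

    Binding a port below 1024 raises a permission error for non-root
    processes -- the "port 909: Permission denied" lines above.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    for port in range(low, high + 1):
        try:
            s.bind(("0.0.0.0", port))
            return s, port
        except PermissionError:      # reserved port and we are not root
            continue
        except OSError:              # port busy; try the next one
            continue
    s.bind(("0.0.0.0", 0))           # port 0 = kernel picks a free port
    return s, s.getsockname()[1]
```

So if binding the low range matters to you (some firewalls filter on source port), the process needs the privilege to do it; otherwise the fallback ports are what the firewall has to allow.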
dumpdates causing amdump to fail.
I want to dump /etc, which contains the dumpdates file. Is there a way to ignore or exclude this file in a non-compression dump of /etc? The error I receive is /-- linus /etc lev 0 FAILED [/sbin/dump returned 1] sendbackup: start [linus:/etc level 0] sendbackup: info BACKUP=/sbin/dump sendbackup: info RECOVER_CMD=/sbin/restore -f... - sendbackup: info end | DUMP: You can't update the dumpdates file when dumping a subdirectory | DUMP: The ENTIRE dump is aborted. sendbackup: error [/sbin/dump returned 1] \ Any suggestions would be greatly appreciated. Thanks, Doug Johnson Systems Administrator Vifan USA, Inc. 1 Vifan Drive Morristown, TN 37814 423-581-6990 x207
Re: Mysql backup
I've always just used the mysqldump program which outputs all of the commands to create the tables and data. Output this to a file and you have a backup of your database. The MySQL reference manual has a good section on how to use this (Section 15.7 in my documentation). Steve _ Steve Cousins Email: [EMAIL PROTECTED] Research AssociatePhone: (207) 581-4302 Ocean Modeling Group School of Marine Sciences 208 Libby Hall University of Maine Orono, Maine 04469 On Thu, 4 Apr 2002, John Rosendahl wrote: Does anyone know of a tryly slick way to back up mysql databases. Right now I am trying to write a script which read-only locks the databases then runs amanda and then unlocks the data bases. I am having fun doing it but I see little reason to re-invent the wheel if someone else has already done this. I also could be going about this in an entirly wrong headed manner. All I know is I will not have the diskspace to hold an entire mysql hotcopy on disk, so I am trying to find a way to have amanda back up the database files directly. The ultimate solution would be to have amanda go through and lock the databases individualy as it works but that I fear is close to impossible. Oh well. thanks in advance for any help -john
RE: Still having problems with ufsdump returning 3...
John R. Jackson [mailto:[EMAIL PROTECTED]] wrote: Are there any particular log files that I should look at to see what happened? First, look for any core files in /tmp/amanda. None found... Then read through the amdump.NN file that corresponds to this run. It's in the logdir directory (amgetconf config logdir). The only lines in amdump.2 which seem to apply (other than the planning lines) are: driver: send-cmd time 876.106 to dumper1: FILE-DUMP 01-00058 /usr/local/amanda/holdingdisk//20020402/gcrc.mgh.harvard.edu._dev_dsk_c0t2d0s4. 2 gcrc.mgh.harvard.edu /dev/dsk/c0t2d0s4 2 2002:3:27:0:8:23 1048576 DUMP 64416 |;bsd-auth;srvcomp-fast;index; and driver: result time 2731.933 from dumper1: FAILED 01-00058 [/usr/sbin/ufsdump returned 3] Which still leaves me mystified... Ricky
Re: dumpdates causing amdump to fail.
On Thu, 4 Apr 2002 at 2:05pm, Doug Johnson wrote | DUMP: You can't update the dumpdates file when dumping a subdirectory RTEM. :) You're trying to back up a subdirectory, not a filesystem. dump won't let you do this. Either back up filesystems only or use tar. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
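For the tar route Joshua mentions, a dumptype along these lines (the name and options here are only an example) lets you put subdirectories such as /etc in the disklist, since GNU tar has no dumpdates concept:

```
define dumptype comp-etc-tar {
    program "GNUTAR"        # GNU tar instead of dump
    comment "subdirectories via GNU tar"
    compress client fast
    index yes
}
```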
RE: dumpdates causing amdump to fail.
Thanks Joshua. R'ing every manual I can find, I was trying to come up with a solution to a problem that I am having. When I choose the root file system (/) called sda5 and put it in the disklist as follows: schroeder sda5 nocomp-root I receive the following error when I do an amcheck: ERROR: schroeder: [could not access sda5 (sda5): No such file or directory] I thought that maybe it didn't like the fact that it was a / filesystem, so I was trying to just use the directories that I specify. If I exclude that filesystem everything works OK. What's up with /? And what page did you find that information on? : Doug -Original Message- From: Joshua Baker-LePain [mailto:[EMAIL PROTECTED]] Sent: Thursday, April 04, 2002 2:50 PM To: Doug Johnson Cc: [EMAIL PROTECTED] Subject: Re: dumpdates causing amdump to fail. On Thu, 4 Apr 2002 at 2:05pm, Doug Johnson wrote | DUMP: You can't update the dumpdates file when dumping a subdirectory RTEM. :) You're trying to back up a subdirectory, not a filesystem. dump won't let you do this. Either back up filesystems only or use tar. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
RE: dumpdates causing amdump to fail.
On Thu, 4 Apr 2002 at 3:23pm, Doug Johnson wrote Thanks Joshua. R'ing every manual I can find, I was trying to come up with a solution to a problem that I am having. When I choose the root file system (/) called sda5 and put it in the disklist as follows: schroeder sda5 nocomp-root I receive the following error when I do an amcheck: ERROR: schroeder: [could not access sda5 (sda5): No such file or directory] I thought that maybe it didn't like the fact that it was a / filesystem, so I was trying to just use the directories that I specify. If I exclude that filesystem everything works OK. What's up with /? And what page did you find that information on? : 2.4.2p2 has some issues with mapping device names to directories on Linux. There's a patch for this at http://www.amanda.org/patches.html. Note that you may want to consider using directory names rather than devices. It comes in handy if you move stuff around. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
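Concretely, the two disklist spellings look like this (hostname and dumptype taken from the post above); both should work once the mapping issue is patched, but the directory form keeps working if partitions get shuffled later:

```
# disklist sketch
schroeder /       nocomp-root    # directory form -- preferred
schroeder sda5    nocomp-root    # device form; needs the 2.4.2p2 mapping patch on Linux
```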
Re: I know this message... help restore entire system THANKS
Thanks for all the info. Of course the solution of restoring to the second drive before the 1st one fails is simply ingenious. BTW, what is a kickstart config file? Could you just point me in the right direction? It's just that this machine is an old Pentium 100 box. It does not do much, except firewall, router, DNS, and other crucial functions, so I would like to be able to restore it as quickly as possible. Thanks again for all the help. Igor. --- Lewis Watson [EMAIL PROTECTED] wrote: - Original Message - From: John R. Jackson [EMAIL PROTECTED] To: Mr Igor Vertiletsky [EMAIL PROTECTED] Cc: [EMAIL PROTECTED] Sent: Wednesday, April 03, 2002 6:52 PM Subject: Re: I know this message... help restore entire system ... the system is configured as amanda client and has only one partition / (root). ... Anyway, my question is: how do I restore the entire system (in case I need to replace or rebuild the hard drive)? I have recent full dumps of the server. Amanda does not provide bare metal restore features. If it crashes hard you'll need to get the client up enough (e.g. from your original OS distribution materials) to get on the network, and then you can pull back the dump images and blat over the top of things. The kickstart config file works wonders when rebuilding. That will get the bare metal install; then use Amanda to restore the data. Works for me, but I would like to hear from others on this. hth, Lewis Watson __ Do You Yahoo!? Yahoo! Tax Center - online filing with TurboTax http://taxes.yahoo.com/
Amanda and Firewalls.
Hi Amanda Users, I have a question. I would like to use Amanda on servers that we have outside our firewall. Is there any way to get Amanda to work without opening ports on the firewall, such as using SSH or some other way, or will Amanda only work with ports open through the firewall? I have read the FAQs on the Amanda site, and I don't understand why there has to be a range of UDP ports open. Would it work with just one UDP port, instead of opening a range of UDP ports? Maybe someone could explain how and why, and which ports I should open on the firewall. Thanks! Ward.
Re: I know this message... help restore entire system THANKS
On Red Hat 7.2 it should be in /root, named anaconda-ks.cfg. I think you need to copy this over to a floppy and rename it ks.cfg. When you place the RH 7.2 install disc in and it comes up with the install menu, you type in linux: something-something and it will make an install just like you had on the machine where the kickstart file came from. I will try to find the something-something line; take a look at www.linuxdoc.org hth, Lewis Watson - Original Message - From: Mr Igor Vertiletsky [EMAIL PROTECTED] To: [EMAIL PROTECTED] Sent: Thursday, April 04, 2002 3:19 PM Subject: Re: I know this message... help restore entire system THANKS Thanks for all the info. Of course the solution of restoring to the second drive before the 1st one fails is simply ingenious. BTW, what is a kickstart config file? Could you just point me in the right direction? It's just that this machine is an old Pentium 100 box. It does not do much, except firewall, router, DNS, and other crucial functions, so I would like to be able to restore it as quickly as possible. Thanks again for all the help. Igor. --- Lewis Watson [EMAIL PROTECTED] wrote: - Original Message - From: John R. Jackson [EMAIL PROTECTED] To: Mr Igor Vertiletsky [EMAIL PROTECTED] Cc: [EMAIL PROTECTED] Sent: Wednesday, April 03, 2002 6:52 PM Subject: Re: I know this message... help restore entire system ... the system is configured as amanda client and has only one partition / (root). ... Anyway, my question is: how do I restore the entire system (in case I need to replace or rebuild the hard drive)? I have recent full dumps of the server. Amanda does not provide bare metal restore features. If it crashes hard you'll need to get the client up enough (e.g. from your original OS distribution materials) to get on the network, and then you can pull back the dump images and blat over the top of things. The kickstart config file works wonders when rebuilding. That will get the bare metal install; then use Amanda to restore the data. 
Works for me, but I would like to hear from others on this. hth, Lewis Watson
Re: Amanda and Firewalls.
On Thu, 4 Apr 2002, Ward Violanti wrote: Hi Amanda Users, I have a question. I would like to use Amanda on servers that we have outside our firewall. Is there any way to get Amanda to work without opening ports on the firewall, such as using SSH or some other way, or will Amanda only work with ports open through the firewall? I have read the FAQs on the Amanda site, and I don't understand why there has to be a range of UDP ports open. Would it work with just one UDP port, instead of opening a range of UDP ports? Maybe someone could explain how and why, and which ports I should open on the firewall. Thanks! Ward. While I'm sure JJ and others lurking can confirm the true details, I believe that the clients start sending back udp packets to the server like so (from one of my client sendbackup files): sendbackup: stream_server: waiting for connection: 0.0.0.0.729 sendbackup: stream_server: waiting for connection: 0.0.0.0.730 sendbackup: stream_server: waiting for connection: 0.0.0.0.731 waiting for connect on 729, then 730, then 731 sendbackup: stream_accept: connection from firewall.719 sendbackup: stream_accept: connection from firewall.720 sendbackup: stream_accept: connection from firewall.721 got all connections Since ssh is a tcp connection, I don't see any way to have Amanda use that as the transport because of how it was designed to use udp for speed/etc. If you compile Amanda to restrict the tcp/udp port ranges, it won't open up anything on your firewall that the public can see; it's more that your firewall is configured to pass such connections on to the clients and vice-versa. -- ~ Doug Silver Network Manager Quantified Systems, Inc ~
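On the "compile Amanda to restrict the port ranges" point: Amanda's configure script has offered switches for pinning the ranges down at build time. Treat the exact option names below as an assumption to verify against ./configure --help on your release before relying on them:

```
# build-time sketch -- confirm these switches exist in your version
./configure --with-portrange=50000,50100 \
            --with-udpportrange=850,859
```

With fixed windows like these, the otherwise random ports in the firewall FAQ sequence become predictable and can be permitted through the firewall explicitly.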
Problem with index files
hi, I have a problem with my index files. Each run, a gzip'd index file is created. Each run, this file is about 20 bytes long. I doubt gzip is compressing the information on 20+GB of files into 20 bytes... :-( Looking at the sendbackup.*.log files, I can see where the index program is invoked. I don't see anything to indicate _why_ the index files are so small. The amanda.conf has a global config with index yes which is used with all the dumptypes I have. amadmin reports v2.4.2p2. I'm running a RH7.2 Linux box, x86. GNU tar, gzip are in the right places. GNU gzip is v1.3 --- is that the right version? GNU tar is v1.13.19; what else should I look for/at to diagnose this? I'd like to know how to restore index files from tape, if anyone knows how to do that... Any help appreciated. -Adam -- Adam Lins www.bdti.com Engineering Manager Ph:(510) 665-1600 Berkeley Design Technology, Inc Fx:(510) 665-1680 Berkeley, CA 94704
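A .gz file of roughly 20 bytes is what gzip produces when handed zero bytes of input (just the gzip header, an empty deflate stream, and the trailer), so the index stream reaching the server is empty rather than badly compressed; the thing to chase is why the client-side index pipe (the gtar -tf - seen in sendbackup logs) produces no output. A quick way to confirm the files really are empty, sketched in Python (the temp file below only reproduces the symptom; point index_entry_count at your real index files):

```python
import gzip
import os
import tempfile

def index_entry_count(path):
    """Count the entries inside a gzip'd index file."""
    with gzip.open(path, "rt") as f:
        return sum(1 for _ in f)

# Reproduce the symptom: gzip wrapping zero bytes of input yields a
# tiny file (header + empty deflate stream + trailer), yet it unzips
# to nothing at all.
path = os.path.join(tempfile.mkdtemp(), "empty-index.gz")
with gzip.open(path, "wt"):
    pass
print(os.path.getsize(path), index_entry_count(path))
```

If your real index files show zero entries the same way, the compression side is working fine and the problem is upstream of gzip on the client.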