Re: ["data write: File too large"]
On Tuesday 15 January 2002 09:56 am, Rivera, Edwin wrote:
> is there a way, in the amanda.conf file, to specify *NOT* to use
> the holding-disk for a particular filesystem?
>
> for example, if i use amanda to backup 8 filesystems on one box
> and i want 7 to use the holding-disk, but one not to.. is that
> possible?
>
> just curious..

Down toward the end of amanda.conf is just such a 'dumptype'; edit it to suit your circumstances.

[...]

--
Cheers, Gene
AMD K6-III@500mhz 320M
Athlon1600XP@1400mhz 512M
98.3+% setiathome rank, not too shabby for a hillbilly
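[Editor's note: the kind of dumptype entry Gene is pointing at might look like the following sketch for Amanda 2.4.x. The dumptype names and the disklist host/path are illustrative, not from the thread.]

```
# amanda.conf -- hypothetical dumptype that bypasses the holding disk
define dumptype comp-user-tar-direct {
    comp-user-tar        # inherit an existing dumptype's settings
    holdingdisk no       # write this filesystem straight to tape
}
```

The one filesystem that should skip the holding disk would then reference it in the disklist, e.g. `somehost /the/one/filesystem comp-user-tar-direct`, while the other seven keep their usual dumptype.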
Newbie - Cycles and dumps
We're in the process of replacing our BRU (inflexible) and Arkeia (buggy) with Amanda. Everything's gone well with installation et al, but I'm confused about dumpcycles, etc. Specifically, I have:

- 3 tapes per day (about 150GB to be backed up). Each day's backup is a 'full' backup, i.e. no differential or incremental
- 5 days per week (Monday - Friday)
- 5 weeks worth of tapes (75 in all) to be cycled.

My initial thoughts were simply:

dumpcycle 35     # 5 'real' weeks
runspercycle 25
tapecycle 75     # no spares!

But this will involve incremental/differential dumps, which I don't want. So, I tried:

dumpcycle 0
tapecycle 75

This worked, but Amanda is now happy to overwrite any tape in the set other than the most recently used (i.e. yesterday's). I want error messages if any of the tapes are used before their 5-week validity is up. So, I'm now thinking:

dumpcycle 35
runspercycle 25
tapecycle 75

and adding "strategy noinc" to my disklist. Will this force Amanda to do full backups each run, and also preserve the tapes for the full dumpcycle?

I'm also unsure about how to slot an archive into the mix. I take quarterly archives, but how does one do this with Amanda? Setting up a separate archive backup set is obvious, but is there an elegant way to slot this into the run? The only way I can see is to massage the crontab, to take the 'Daily' run out in favour of the 'Quarterly' archive when the time comes. Pretty clunky, but if that's the way to do it, it's a small price to pay.

Thanks in advance
Bryan Tonnet
[EMAIL PROTECTED]
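[Editor's note: the combination Bryan proposes can be sketched as one amanda.conf fragment. This is illustrative only; the dumptype name is made up and "strategy noinc" belongs in a dumptype definition that the disklist entries then reference.]

```
# amanda.conf -- sketch of the full-dumps-only cycle described above
dumpcycle 35         # every filesystem gets a full at least every 5 'real' weeks
runspercycle 25      # 5 runs/week x 5 weeks
tapecycle 75         # 3 tapes/run x 25 runs; no tape reused within the cycle

define dumptype always-full {
    comp-user-tar        # or whatever base dumptype is in use
    strategy noinc       # force a full dump on every run
}
```

Each disklist entry would then use the always-full dumptype, e.g. `somehost /export always-full`.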
Re: SUMMARY: Re: onstream adr50 problems
On Tuesday 15 January 2002 10:30 am, Moritz Both wrote:
> I have got an email from OnStream support today. They say that
> they are *not* planning to release a new firmware for the ADR50
> or the ADR30 at the moment. Thus, no support for amanda.
>
> They claim that their new generation drives, the ADR2 series, do
> not have this problem.

Typical... They've sold what they're gonna sell of this model, so now they push the "new and improved" one. &^#$#marketroids. Tell ya what, I'd make sure the sales on the next one were a bust too, if only to send a message comparing their support to a 10^-33 torr vacuum.

--
Cheers, Gene
AMD K6-III@500mhz 320M
Athlon1600XP@1400mhz 512M
98.3+% setiathome rank, not too shabby for a hillbilly
RE: ["data write: File too large"]
is there a way, in the amanda.conf file, to specify *NOT* to use the holding-disk for a particular filesystem?

for example, if i use amanda to backup 8 filesystems on one box and i want 7 to use the holding-disk, but one not to.. is that possible?

just curious..

-edwin

-----Original Message-----
From: Adrian Reyer [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 15, 2002 9:24 AM
To: Pedro Aguayo
Cc: Rivera, Edwin; [EMAIL PROTECTED]
Subject: Re: ["data write: File too large"]

On Tue, Jan 15, 2002 at 09:14:33AM -0500, Pedro Aguayo wrote:
> But, I think Edwin doesn't have this problem, meaning he says he doesn't
> have a file larger than 2gb.

I had none, either, but the filesystem was dumped into a file as a whole, leading to a huge file; same with tar. The problem only occurs when the holding-disk is used. Continuously writing a stream of unlimited size to a tape is no problem, but as soon as you try to do this onto a filesystem, you run into whatever limits you have, mostly 2GB limits on a single file. No holding-disk -> no big file -> no problem. (Well, the tape might have to stop more often because of interruptions in data-flow.)

Regards,
Adrian Reyer
--
Adrian Reyer                            Fon: +49 (7 11) 2 85 19 05
LiHAS - Servicebuero Stuttgart          Fax: +49 (7 11) 5 78 06 92
Adrian Reyer & Joerg Henner GbR         Mail: [EMAIL PROTECTED]
Linux, Netzwerke, Consulting & Support  http://lihas.de/
RE: ["data write: File too large"]
This makes sense, because when I ran my initial Amanda dump on that host I had no holding-disk defined, and it did back up the filesystem at level 0, and that filesystem has over 24GB of data on it, albeit all small .c files and such.

I am left wondering, then, how chunksize fits into the equation. It was my understanding that this is what chunksize was for.

well, i just started another amdump right now without the holding-disk in the amanda.conf file. let's see what happens.

-edwin

-----Original Message-----
From: Adrian Reyer [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 15, 2002 9:24 AM
To: Pedro Aguayo
Cc: Rivera, Edwin; [EMAIL PROTECTED]
Subject: Re: ["data write: File too large"]

On Tue, Jan 15, 2002 at 09:14:33AM -0500, Pedro Aguayo wrote:
> But, I think Edwin doesn't have this problem, meaning he says he doesn't
> have a file larger than 2gb.

I had none, either, but the filesystem was dumped into a file as a whole, leading to a huge file; same with tar. The problem only occurs when the holding-disk is used. Continuously writing a stream of unlimited size to a tape is no problem, but as soon as you try to do this onto a filesystem, you run into whatever limits you have, mostly 2GB limits on a single file. No holding-disk -> no big file -> no problem. (Well, the tape might have to stop more often because of interruptions in data-flow.)

Regards,
Adrian Reyer
--
Adrian Reyer                            Fon: +49 (7 11) 2 85 19 05
LiHAS - Servicebuero Stuttgart          Fax: +49 (7 11) 5 78 06 92
Adrian Reyer & Joerg Henner GbR         Mail: [EMAIL PROTECTED]
Linux, Netzwerke, Consulting & Support  http://lihas.de/
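[Editor's note: the chunksize parameter Edwin asks about is set inside the holdingdisk block of amanda.conf. A sketch, with made-up path and sizes, of how it caps the size of any single file on the holding disk:]

```
# amanda.conf -- hypothetical holding disk with chunked dump images
holdingdisk hd1 {
    directory "/dumps/amanda"
    use 20 Gb            # space Amanda may use here
    chunksize 1 Gb       # split each dump image into 1 GB files,
                         # staying under a 2 GB-per-file filesystem limit
}
```

With chunking in effect, a 24 GB dump image lands on the holding disk as a series of 1 GB files instead of one flat file, so the per-file size limit is never hit.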
Re: Amanda and firewall
Here's how I did this. The relevant portions of my configure line:

./configure --with-udpportrange=850,855 --with-portrange=32800,32850

I used this on both client and server.

And my firewall (linux) looks like this (IP numbers are not real):

Internet      firewall               backup server
              eth0        eth1
1.2.3.x       1.2.3.1     10.0.0.1   10.0.0.2

My (relevant) iptables rules, from /etc/sysconfig/iptables (use these as input to iptables-restore):

[0:0] -A PREROUTING -s 1.2.3.0/255.255.255.0 -d 1.2.3.1 -p tcp -m tcp --dport 10080 -j DNAT --to-destination 10.0.0.2
[0:0] -A PREROUTING -s 1.2.3.0/255.255.255.0 -d 1.2.3.1 -p udp -m udp --dport 10080 -j DNAT --to-destination 10.0.0.2
[0:0] -A PREROUTING -s 1.2.3.0/255.255.255.0 -d 1.2.3.1 -p udp -m udp --dport 850:855 -j DNAT --to-destination 10.0.0.2
[0:0] -A POSTROUTING -s 10.0.0.0/255.255.255.0 -o eth0 -j SNAT --to-source 1.2.3.1

This makes the eth0 firewall address redirect to the backup server on ports 10080 and 850-855, and the backup server masquerades to the internet as 1.2.3.1. Your amandahosts file needs the address of the firewall's public IP in it, and the disklist on the server needs the public IP of the outside clients.

This may have a few extra bits in it, but it works just fine for me. Hope this helps.

On Tuesday 15 January 2002 13:15, Nevin Kapur wrote:
> I'm having some trouble setting up an Amanda client sitting in a DMZ
> of a firewall to talk to an Amanda server sitting inside a firewall.
> I've tried to follow the answer in the FAQ and also read the various
> posts on amanda-users. However, I can't get it to work and some
> questions still linger:
>
> 1. When the docs say pass --with-(udp)portrange=xxx,yyy to configure,
> which configure are they talking about? The client or the server?
>
> 2. In John R. Jackson's post "Use of UDP/TCP ports in Amanda...", in
> the section titled "Firewalls and NAT", it says "Just pick user UDP
> and TCP port ranges and build Amanda with them..." Again, is this on
> the client side or the server side?
> Or both?
>
> 3. I've compiled Amanda with --with-portrange=4711,4715
> --with-udpportrange=850,854 on both client and server side, but when I
> run amcheck, I get errors like:
>
> ERROR: xxx: [host : port 7062 not secure]
>
> where xxx is the name of the machine in the DMZ that I'm trying to
> back up and is the name of our firewall/router, not the server
> that sits inside it.
>
> I hope I am being clear. TIA
>
> -Nevin

--
Rick Morris
Network Manager
WeDoHosting.com
101-4226 Commerce Circle
Victoria BC V8Z 6N6
ph: +1 250 479 1595
fax: +1 250 479 1517
[EMAIL PROTECTED]
http://www.wedohosting.com
Re: Amanda and firewall
Wow, you and I are at almost the exact same place with the same problem. I too am getting errors about port numbers that I didn't set up in the configuration when I compiled amanda. I've been assuming that my firewall was translating port addresses in addition to IP addresses, but this doesn't seem possible or workable.

For what it's worth, I compiled both the tapeserver and client copies of amanda with:

./configure --with-tcpportrange=10084,10100 --with-udpportrange=932,948 --with-user=amanda --with-group=disk --with-portrange=10084,10100 --without-server

For the tapeserver, I left out the "--without-server".

The errors I was getting referred to the 4 range (Sorry, I don't have an exact copy. Will try to generate one tomorrow.). We use an Elron firewall here.

Odd that we're both JHU, too.

-Kevin Zembower

>>> Nevin Kapur <[EMAIL PROTECTED]> 01/15/02 04:15PM >>>
I'm having some trouble setting up an Amanda client sitting in a DMZ of a firewall to talk to an Amanda server sitting inside a firewall. I've tried to follow the answer in the FAQ and also read the various posts on amanda-users. However, I can't get it to work and some questions still linger:

1. When the docs say pass --with-(udp)portrange=xxx,yyy to configure, which configure are they talking about? The client or the server?

2. In John R. Jackson's post "Use of UDP/TCP ports in Amanda...", in the section titled "Firewalls and NAT", it says "Just pick user UDP and TCP port ranges and build Amanda with them..." Again, is this on the client side or the server side? Or both?

3. I've compiled Amanda with --with-portrange=4711,4715 --with-udpportrange=850,854 on both client and server side, but when I run amcheck, I get errors like:

ERROR: xxx: [host : port 7062 not secure]

where xxx is the name of the machine in the DMZ that I'm trying to back up and is the name of our firewall/router, not the server that sits inside it.

I hope I am being clear. TIA

-Nevin
Amanda and firewall
I'm having some trouble setting up an Amanda client sitting in a DMZ of a firewall to talk to an Amanda server sitting inside a firewall. I've tried to follow the answer in the FAQ and also read the various posts on amanda-users. However, I can't get it to work and some questions still linger:

1. When the docs say pass --with-(udp)portrange=xxx,yyy to configure, which configure are they talking about? The client or the server?

2. In John R. Jackson's post "Use of UDP/TCP ports in Amanda...", in the section titled "Firewalls and NAT", it says "Just pick user UDP and TCP port ranges and build Amanda with them..." Again, is this on the client side or the server side? Or both?

3. I've compiled Amanda with --with-portrange=4711,4715 --with-udpportrange=850,854 on both client and server side, but when I run amcheck, I get errors like:

ERROR: xxx: [host : port 7062 not secure]

where xxx is the name of the machine in the DMZ that I'm trying to back up and is the name of our firewall/router, not the server that sits inside it.

I hope I am being clear. TIA

-Nevin
Re: Amrecover still kicking my @$$
On Tuesday 15 January 2002 12:13 pm, you wrote:
> Looks to me like xinetd is working correctly. If xinetd was refusing
> the connection you would never see the 220 prompt.
>
> Some suggestions:
>
> 1. Does /.amandahosts on the tape server have a line like:
>
> localhost root
> my.server root

Yes, and then some. I've added "root" and "operator" lines for every conceivable valid address for the system. ("operator" is the dumpuser, and also the user that the amanda services run as under xinetd.)

> 2. It may be necessary to add the server name to /etc/hosts.

We use DNS instead of hosts, but, regardless, it's in hosts as well, with the same name that DNS reports.

> 3. Try
>
> # amrecover -s localhost -t localhost -C

Tried that too, same error message.

When setting this up originally, I went through hell moving, renaming, chowning and chmodding files, as a number of things weren't readable by the "operator" user or were in the wrong place. (In fact, /var/lib/amanda/ and /etc/amanda/ are now the same directory, symlinked.) Is there any chance my thrashing about has screwed up some vital file there that could trigger this error?

'lsof' tells me nothing particularly useful, although I did learn that amindexd is writing debug logs:

amindexd: debug 1 pid 2586 ruid 11 euid 11 start time Tue Jan 15 13:44:32 2002
amindexd: version 2.4.1p1
< 220 [server] AMANDA index server (2.4.1p1) ready.
> ÿôÿý
< 500 Access not allowed
< 200 Good bye.
amindexd: pid 2586 finish time Tue Jan 15 13:44:45 2002

Not very helpful to me. Is there any way to ratchet up the debug level?
Help! timeouts
I have been using amanda-2.4.2p2-4 (RPM) on linux (redhat 7.2) for some time with good results. Now they have moved my equipment so that there is a firewall between my clients and server. After the move I am getting data timeouts. I can't figure out what the problem is. Here is what the client says:

sendbackup: debug 1 pid 927 ruid 33 euid 33 start time Mon Jan 14 21:13:27 2002
/usr/lib/amanda/sendbackup: version 2.4.2p2
sendbackup: got input request: GNUTAR / 0 1970:1:1:0:0:0 OPTIONS |;bsd-auth;index;exclude-list=.amanda.excludes;
  parsed request as: program `GNUTAR' disk `/' lev 0 since 1970:1:1:0:0:0 opt `|;bsd-auth;index;exclude-list=.amanda.excludes;'
sendbackup: try_socksize: send buffer size is 65536
sendbackup: stream_server: waiting for connection: 0.0.0.0.54037
sendbackup: stream_server: waiting for connection: 0.0.0.0.54038
sendbackup: stream_server: waiting for connection: 0.0.0.0.54039
  waiting for connect on 54037, then 54038, then 54039
sendbackup: stream_accept: connection from 10.84.192.253.34089
sendbackup: stream_accept: connection from 10.84.192.253.34090
sendbackup: stream_accept: connection from 10.84.192.253.34091
  got all connections
sendbackup-gnutar: doing level 0 dump as listed-incremental to /var/lib/amanda/gnutar-lists/rpppc1__0.new
sendbackup-gnutar: doing level 0 dump from date: 1970-01-01 0:00:00 GMT
sendbackup: spawning /usr/lib/amanda/runtar in pipeline
sendbackup: argument list: gtar --create --file - --directory / --one-file-system --listed-incremental /var/lib/amanda/gnutar-lists/rpppc1__0.new --sparse --ignore-failed-read --totals --exclude-from //.amanda.excludes .
sendbackup: started index creator: "/bin/tar -tf - 2>/dev/null | sed -e 's/^\.//'"
sendbackup-gnutar: /usr/lib/amanda/runtar: pid 933
index tee cannot write [Connection timed out]
sendbackup: pid 932 finish time Mon Jan 14 22:01:00 2002
error [/bin/tar got signal 13]
sendbackup: pid 927 finish time Mon Jan 14 22:01:00 2002
Re: ** Error reading label on tape
> > amrestore ody shows:
> > amrestore: could not open tape ody: No such file or directory
>
> That's not the proper syntax to amrestore. It needs to know the tape
> device, i.e. 'amrestore /dev/rmt/0bn'. It doesn't care about a config
> file.

[2:20pm chip]# mt rewind
[2:20pm chip]# amrestore /dev/rmt/0bn
amrestore: missing file header block
amrestore: WARNING: not at start of tape, file numbers will be offset
amrestore: 0: reached end of tape: date

> > any other methods of getting info from this tape.
> > (dd ufsrestore etc) (I've tried, but no success)
>
> Are you using the correct device? Were you using hardware compression
> before? Do you need the 'b' in there?

Just going from memory from before, I've tried with and without the "b".

> Try this:
>
> mt rewind
> mt fsf 1    (the first file on the tape is just an AMANDA tape header)
> dd if=/dev/rmt/0bn of=image1.header bs=32k count=1

[2:20pm chip]# mt rewind
[2:21pm chip]# mt fsf 1
[2:21pm chip]# dd if=/dev/rmt/0bn of=image1.header bs=32k count=1
0+0 records in
0+0 records out

> The file image1.header should then contain a string indicating the date,
> time, level, client, filesystem, etc of the backup. You can grab the rest
> of the image by doing:
>
> dd if=/dev/rmt/0bn of=image1 bs=32k

[2:22pm chip]# dd if=/dev/rmt/0bn of=image1 bs=32k
0+0 records in
0+0 records out

> Or, to just grab the image directly, do this in place of the first 'dd'
> command above:
>
> dd if=/dev/rmt/0bn of=image1 bs=32k skip=1

[2:22pm chip]# dd if=/dev/rmt/0bn of=image1 bs=32k skip=1
dd: cannot skip past end-of-file

I'm doing everything possible to not tell myself there is nothing on this tape. I admit I didn't run verify, but the amanda reports from when it was working reported everything normal, and it took several hours to back up the stuff (as expected).

Brad
Supplement: Interactive Tar Restore?
One more thing:

As on a normal NT server, the hard drives are usually shared automagically via a $ sign; therefore if you have C, D and E drives, they are "shared" respectively as //host/c$, //host/d$, and //host/e$, assuming you've got administrative-privilege access. So I got lucky and was able to fully back up c$ like an ext2 partition. My disklist entry was:

amanda-server //host/c$ user-tar

Now, here is the problem: I do want an interactive restore if such a possibility exists, but how the heck do I restore that partition? Right now if I just type in

amrestore -p /dev/nst0 amanda-server | tar xvf -

it will dump the contents of the tar in the current directory, but I could not tell if it would only do c$. The output of amrestore while it spans through the 'files' shows that the NT server index is called hostname.__host_c$.date.0, so I tried:

amrestore -p /dev/nst0 amanda-server hostname __host_c$ | tar xvf -

but it just skips. I even tried c$, c\$, //host/c$ and so on.. it just skips it. Any ideas? =)

Thanks!
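[Editor's guess, not from the thread: amrestore matches its hostname/diskname arguments as patterns against the image labels, and in a pattern a bare '$' anchors end-of-line, so '__host_c$' can never match inside 'hostname.__host_c$.date.0'. The grep demo below illustrates the pitfall with a made-up label; the amrestore behaviour itself is an assumption worth testing with an escaped '\$'.]

```shell
# Demonstration of the '$'-anchoring pitfall, using a made-up image label.
label='hostname.__host_c$.20020115.0'

# Unescaped: '$' anchors end-of-line, so the pattern never matches.
printf '%s\n' "$label" | grep '__host_c$' || echo 'unescaped: no match'

# Escaped: '\$' matches the literal dollar sign in the label.
printf '%s\n' "$label" | grep -q '__host_c\$' && echo 'escaped: match'
```

If amrestore behaves the same way, something like `amrestore -p /dev/nst0 hostname '__host_c\$' | tar xvf -` (note the quoting, so the shell passes the backslash through) may stop it from skipping the image.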
Re: ** Error reading label on tape
Ah, the beauty of amanda. Use the no-rewind device and issue:

mt -t fsf 1

That will skip the label header. Then you can use dump, GNUTAR or whatever backup utility you used and restore or list the tape using the no-rewind device. For GNUTAR that might be:

tar xvf /dev/rmt/0n

[EMAIL PROTECTED] said:
> Help, please.
> I had been running amanda 2.4.2p2 on a SunSparc Ultra 2 backing up to
> a DLT2000 drive.
> HD in system has crashed, and now I can't access my backup tape.
> Have installed amanda again, and think I have things config'd same as
> before. (but I don't have my original amanda.conf file, so I'm not
> 100% sure)
> any other methods of getting info from this tape.
> (dd ufsrestore etc) (I've tried, but no success)

---
Wayne Richards
e-mail: [EMAIL PROTECTED]
Re: ** Error reading label on tape
On Tue, 15 Jan 2002 at 1:08pm, Brad Groshok wrote

> I had been running amanda 2.4.2p2 on a SunSparc Ultra 2
> backing up to a DLT2000 drive.
>
> amverify ody shows:
> Using device /dev/rmt/0bn
> ** Error reading label on tape
> Errors found:
> 0+0 records in
> 0+0 records out
>
> amrestore ody shows:
> amrestore: could not open tape ody: No such file or directory

That's not the proper syntax to amrestore. It needs to know the tape device, i.e. 'amrestore /dev/rmt/0bn'. It doesn't care about a config file.

> any other methods of getting info from this tape.
> (dd ufsrestore etc) (I've tried, but no sucess)

Are you using the correct device? Were you using hardware compression before? Do you need the 'b' in there?

Try this:

mt rewind
mt fsf 1    (the first file on the tape is just an AMANDA tape header)
dd if=/dev/rmt/0bn of=image1.header bs=32k count=1

The file image1.header should then contain a string indicating the date, time, level, client, filesystem, etc. of the backup. You can grab the rest of the image by doing:

dd if=/dev/rmt/0bn of=image1 bs=32k

Or, to just grab the image directly, do this in place of the first 'dd' command above:

dd if=/dev/rmt/0bn of=image1 bs=32k skip=1

Then try ufsrestore on your image1 file. If you can't read the tapes from that device, try 0n, 0cn, or 0bcn.

--
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University
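[Editor's note: the dd arithmetic above can be rehearsed safely on an ordinary file before touching the tape. This sketch fakes a 32 KB header block followed by a dump image and splits them with the same bs/count/skip options; the path and header text are made up.]

```shell
# Build a fake "tape": one 32 KB header block followed by the dump image.
tape=/tmp/fake_tape
{ printf 'AMANDA: FILE 20020115 host /disk'    # 32 bytes of header text (made up)
  head -c 32736 /dev/zero                      # zero-pad the header block to 32768 bytes
  printf 'PAYLOAD'                             # stand-in for the dump image
} > "$tape"

# Same commands as for /dev/rmt/0bn, pointed at the file:
dd if="$tape" of=/tmp/image1.header bs=32k count=1 2>/dev/null   # first 32 KB block
dd if="$tape" of=/tmp/image1 bs=32k skip=1 2>/dev/null           # everything after it
```

On the real tape, `mt fsf 1` first positions past Amanda's tape label; the file version only illustrates how one 32 KB block of header is peeled off the front of each image.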
Interactive Restore with Tar?
I am trying to restore an NT partition from backup. Naturally one has to use GNUTAR to be able to backup the filesystem. However is there an interactive method to restore from tar, or must I untar to a holding disk first, then grab the files I need. Also a while back there was mention of restoring using amrestore then piping to smbclient. However, it didn't work out for me. If I have to untar to holding disk first, this would lead me to a small problem in case my holding disk is a little smaller than the filesystem I backed up. Hence this is the reason why I would like an interactive method. Otherwise Dump works well with restore but not with tar. Thanks! Tanniel Simonian P/A III UC Riverside Libraries
Re: Amrecover has defeated me.
Looks to me like xinetd is working correctly. If xinetd was refusing the connection you would never see the 220 prompt.

Some suggestions:

1. Does /.amandahosts on the tape server have a line like:

localhost root
my.server root

2. It may be necessary to add the server name to /etc/hosts.

3. Try

# amrecover -s localhost -t localhost -C

On Tue, 15 Jan 2002, Eric Hillman wrote:

- And yes, I have read the FAQ.
-
- On my RedHat 7 server, Amrecover invariably dies with an "Unexpected server
- end of file" error. All other amanda functions (amdump, amcheck, amlabel) work
- great. I have the sneaking suspicion that xinetd is somehow to blame, mainly
- because of this error message which shows up in /var/log/messages whenever I
- run amrecover:
-
- Jan 15 11:05:25 www xinetd[27241]: refused connect from 206.101.101.101
-
- (I changed the IP address, but it is the eth0 address, despite the fact that
- I'm using 'amrecover -s localhost')
-
- but... /etc/hosts.allow is set up to allow access from my own host and every
- conceivable alias or variation thereof. Ditto for .amandahosts. And
- telnetting directly to the ports seems to work. I can even get an amandaidx
- session going, after a fashion (the line in ALL CAPS is my input):
-
- [root@penguin tmp]# telnet localhost 10082
- Trying 127.0.0.1...
- Connected to localhost.localdomain (127.0.0.1).
- Escape character is '^]'.
- 220 penguin AMANDA index server (2.4.1p1) ready.
- I CANT SPEAK YOUR CRAZY MOON LANGUAGE
- 500 Access not allowed
- 200 Good bye.
- Connection closed by foreign host.
-
- Any suggestions on what I'm missing, or how I can track down why this is
- happening?

--
Stephen Carville
UNIX and Network Administrator
Ace Flood USA
310-342-3602
[EMAIL PROTECTED]
Re: problem to use smbclient (Samba)
answering my own question:

> * Amanda version: 2.4.2p2,
>   compiled with --with-smb-client=/full/path/...

I tried to recompile again, and checked the "configure" output:

$ ./configure ... --with-smblient=/usr/local/samba/bin/smbclient ...
(...)
checking for smbclient... no
(...)

but,

$ ls -al /usr/local/samba/bin/smbclient
-rwxr-xr-x  1 root  root  574817 Nov 22 16:01 smbclient

so, I guess configure doesn't parse the full path I give it, but tries a default one instead. I then made a link to smbclient in /usr/bin (which is in the $PATH). Then, "configure" works:

...
checking for smbclient... /usr/bin/smbclient
...

and "amcheck" works for the NT box after recompiling Amanda.

So, is there some mistake in the doc or in the configure command, or did I miss something?

thanks,
Pierre
** Error reading label on tape
Help, please.

I had been running amanda 2.4.2p2 on a SunSparc Ultra 2, backing up to a DLT2000 drive. The HD in the system has crashed, and now I can't access my backup tape. I have installed amanda again, and think I have things config'd the same as before (but I don't have my original amanda.conf file, so I'm not 100% sure).

amverify ody shows:
Using device /dev/rmt/0bn
** Error reading label on tape
Errors found:
0+0 records in
0+0 records out

amrestore ody shows:
amrestore: could not open tape ody: No such file or directory

I think my tape was labeled tape1. Any suggestions would be GREATLY appreciated.

Any other methods of getting info from this tape? (dd, ufsrestore, etc.) (I've tried, but no success.)

The tape should have 2 backups on it (2 different sun boxes). The one I'm looking for should be the first one on the tape.

Regards:
Brad Groshok ([EMAIL PROTECTED])
Odynet Inc
London Ontario Canada
Phone: (519) 660-
Fax: (519) 660-6770
http://www.ody.ca
problem to use smbclient (Samba)
hello,

I've run into trouble when trying to configure my Amanda server to back up NT hosts through the Samba suite:

* Amanda index/tape server: Linux RH7.1 (the server is also an Amanda client)
* Amanda version: 2.4.2p2, compiled with --with-smb-client=/full/path/...
* Samba version: 2.2
* amanda user: bin.disk

In the following output from amcheck, the Amanda index/tape server is called "myServer", the NT box is called "thePC". The amanda config is called "pc".

###
[root@myServer /usr]# su -c "/usr/local/amanda/sbin/amcheck -c -s pc" bin
Amanda Tape Server Host Check
... (OK)
Amanda Backup Client Hosts Check
ERROR: myServer: [The client is not configured for samba://thePC/sharename]
ERROR: myServer: [SMBCLIENT program not available]
Client check: 1 host checked in 0.016 seconds, 2 problems found
(brought to you by Amanda 2.4.2)

###
/etc/amandapass is:
//myPC/sharename administrateur%theRightPassword

###
disklist for "pc" is:
myServer //myPC/sharename pc_settings

I must also note that the share is reachable via the smbclient command used as a shell command, and the file permissions for the Amanda user seem correct. ...but amcheck keeps sending this error!

May anyone give me some help? Did I forget something when building Amanda?

thanks,
Pierre
Amrecover has defeated me.
And yes, I have read the FAQ.

On my RedHat 7 server, Amrecover invariably dies with an "Unexpected server end of file" error. All other amanda functions (amdump, amcheck, amlabel) work great. I have the sneaking suspicion that xinetd is somehow to blame, mainly because of this error message which shows up in /var/log/messages whenever I run amrecover:

Jan 15 11:05:25 www xinetd[27241]: refused connect from 206.101.101.101

(I changed the IP address, but it is the eth0 address, despite the fact that I'm using 'amrecover -s localhost')

but... /etc/hosts.allow is set up to allow access from my own host and every conceivable alias or variation thereof. Ditto for .amandahosts. And telnetting directly to the ports seems to work. I can even get an amandaidx session going, after a fashion (the line in ALL CAPS is my input):

[root@penguin tmp]# telnet localhost 10082
Trying 127.0.0.1...
Connected to localhost.localdomain (127.0.0.1).
Escape character is '^]'.
220 penguin AMANDA index server (2.4.1p1) ready.
I CANT SPEAK YOUR CRAZY MOON LANGUAGE
500 Access not allowed
200 Good bye.
Connection closed by foreign host.

Any suggestions on what I'm missing, or how I can track down why this is happening?
Re: ["data write: File too large"]
Pedro Aguayo wrote:
>
> Ok, I didn't, but I think I do now.
> Basically when amanda writes to the holding disk, it writes it to a flat file
> on the file system, and if that flat file is larger than 2gb then you might
> encounter a problem if your filesystem has a limitation where it can only
> support files <2gb.
> But if you write directly to tape you will avoid this problem because you are
> bypassing the filesystem.

And that's why the parameter "chunksize" for the holding disk can be set to e.g. 1Gbyte. And, when everything else fails, read the manual pages. :-)

> Right Adrian?
>
> Hope I got it right, but this makes sense.

--
Paul Bijnens, Lant
Interleuvenlaan 15 H, B-3001 Leuven, BELGIUM
Tel +32 16 40.51.40
Fax +32 16 40.49.61
http://www.lant.com/
email: [EMAIL PROTECTED]

***
* I think I've got the hang of it now: exit, ^D, ^C, ^\, ^Z, ^Q, F6,
* quit, ZZ, :q, :q!, M-Z, ^X^C, logoff, logout, close, bye, /bye,
* stop, end, F3, ~., ^]c, +++ ATH, disconnect, halt, abort, hangup,
* PF4, F20, ^X^X, :D::D, KJOB, F14-f-e, F8-e, kill -1 $$, shutdown,
* kill -9 1, Alt-F4, Ctrl-Alt-Del, AltGr-NumLock, Stop-A, ...
* ... "Are you sure?" ... YES ... Phew ... I'm out
***
disk offline problem
Good afternoon,

Well, when I execute "dump..." I receive the following e-mail:

FAILURE AND STRANGE DUMP SUMMARY:
sc01us0105 /software/amanda/share/ lev 0 FAILED [disk /software/amanda/share/ offline on sc01us0105?]

DUMP SUMMARY:
                            DUMPER STATS                    TAPER STATS
HOSTNAME     DISK        L  ORIG-KB  OUT-KB  COMP%  MMM:SS   KB/s  MMM:SS    KB/s
---------------------------------------------------------------------------------
frostis.cf.j -a-2.4.2p2/ 1    14336   14336     --    0:18  783.3    0:06  2374.2
sc01us0105   -nda/share/ 0   FAILED ---------------------------------------------

(brought to you by Amanda version 2.4.2p2)

Could anybody tell me what is happening, and why? My test disklist is:

sc01us0105 /software/amanda/share/ nocomp-test 0
frostis.cf.jcyl.es /export/home/setup/freeware/amanda-2.4.2p2/ nocomp-test 0

I'm using DUMP; if I use GNUTAR everything goes fine.

---
Javier Fernández Pérez
Servicio de Informática Corporativa
D.G. de Telecomunicaciones y Transportes
Consejería de Fomento - Junta de Castilla y León
Rigoberto Cortejoso, 14. 47014 Valladolid (Spain)
e-mail: [EMAIL PROTECTED]
---
RE: ["data write: File too large"]
Ok, I didn't, but I think I do now.

Basically, when amanda writes to the holding disk, it writes it to a flat file on the file system, and if that flat file is larger than 2gb then you might encounter a problem if your filesystem has a limitation where it can only support files <2gb. But if you write directly to tape you will avoid this problem because you are bypassing the filesystem.

Right Adrian?

Hope I got it right, but this makes sense.

Pedro

-----Original Message-----
From: Wayne Richards [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 15, 2002 10:12 AM
To: Adrian Reyer
Cc: Pedro Aguayo
Subject: Re: ["data write: File too large"]

I don't understand the problem. When amanda encounters a filesystem larger than the holding disk, she AUTOMATICALLY resorts to direct tape write. Quoting from the amanda.conf file:

# If no holding disks are specified then all dumps will be written directly
# to tape. If a dump is too big to fit on the holding disk than it will be
# written directly to tape. If more than one holding disk is specified then
# they will all be used round-robin.

We routinely back up filesystems larger than our holding disks, and many files > 4GB.

> On Tue, Jan 15, 2002 at 09:14:33AM -0500, Pedro Aguayo wrote:
> > But, I think Edwin doesn't have this problem, meaning he says he doesn't
> > have a file larger than 2gb.
>
> I had none, either, but the filesystem was dumped into a file as a
> whole, leading to a huge file, same with tar. The problem only occurs
> as holding-disk is used. Continuously writing a stream of unlimited
> size to a tape is no problem, but as soon as you try to do this onto a
> filesystem, you run into whatever limits you have, mostly 2GB-limits on a
> single file.
> No holding-disk -> no big file -> no problem.
> (well, tape might
> have to stop more often because of interruption in data-flow)
>
> Regards,
> Adrian Reyer
> --
> Adrian Reyer                            Fon: +49 (7 11) 2 85 19 05
> LiHAS - Servicebuero Stuttgart          Fax: +49 (7 11) 5 78 06 92
> Adrian Reyer & Joerg Henner GbR         Mail: [EMAIL PROTECTED]
> Linux, Netzwerke, Consulting & Support  http://lihas.de/

---
Wayne Richards
e-mail: [EMAIL PROTECTED]
Re: SUMMARY: Re: onstream adr50 problems
I have got an email from OnStream support today. They say that they are *not* planning to release a new firmware for the ADR50 or the ADR30 at the moment. Thus, no support for amanda. They claim that their new generation drives, the ADR2 series, do not have this problem. Moritz
Re: ["data write: File too large"]
On 15-Jan-2002 Adrian Reyer wrote:
>
> No holding-disk -> no big file -> no problem. (well, tape might have
> to stop more often because of interruption in data-flow)
>

Why not define a chunksize of 500 MB on your holding disk? That's what I did. Backups go faster and there's less wear and tear on my tape streamer.

--
|Hans Kinwel | [EMAIL PROTECTED]
RE: ["data write: File too large"]
"I see!" said the blind carpenter, who picked up his hammer and saw. -My ninth grade science teacher, Brother Paul, a terrible punner. -Kevin >>> "Pedro Aguayo" <[EMAIL PROTECTED]> 01/15/02 09:45AM >>> Ahh! I see said the blind man. Pedro -Original Message- From: Adrian Reyer [mailto:[EMAIL PROTECTED]] Sent: Tuesday, January 15, 2002 9:24 AM To: Pedro Aguayo Cc: Rivera, Edwin; [EMAIL PROTECTED] Subject: Re: ["data write: File too large"] On Tue, Jan 15, 2002 at 09:14:33AM -0500, Pedro Aguayo wrote: > But, I think Edwin doesn't have this problem, meaning he says he doesn't > have a file larger than 2gb. I had none, either, but the filesystem was dumped into a file as a whole, leading to a huge file, same with tar. The problem only occurs as holding-disk is used. Continuously writing a stream of unlimited size to a tape is no problem, but as soon as you try to do this onto a filesytem, you run in whatever limits you have, mostly 2GB-limits on a single file. No holding-disk -> no big file -> no problem. (well, tape might have to stop more often because of interruption in data-flow) Regards, Adrian Reyer -- Adrian Reyer Fon: +49 (7 11) 2 85 19 05 LiHAS - Servicebuero SuttgartFax: +49 (7 11) 5 78 06 92 Adrian Reyer & Joerg Henner GbR Mail: [EMAIL PROTECTED] Linux, Netzwerke, Consulting & Support http://lihas.de/
Re: XFS, Linux, and Amanda
In a message dated: Mon, 14 Jan 2002 18:41:04 EST "Brandon D. Valentine" said: >On Mon, 14 Jan 2002, Joshua Baker-LePain wrote: > >>Hmmm, you did 'rpm -e' the RPM version, right? Pre-build amanda=bad. > >Word. Especially the moronic way in which RedHat has decided to build it. I'm glad I'm not the only one who thought this! I've had arguments with others over this issue, where they feel that you should always use the pre-built package if there's one available! -- Seeya, Paul God Bless America! If you're not having fun, you're not doing it right! ...we don't need to be perfect to be the best around, and we never stop trying to be better. Tom Clancy, The Bear and The Dragon
RE: ["data write: File too large"]
Ahh! I see said the blind man. Pedro -Original Message- From: Adrian Reyer [mailto:[EMAIL PROTECTED]] Sent: Tuesday, January 15, 2002 9:24 AM To: Pedro Aguayo Cc: Rivera, Edwin; [EMAIL PROTECTED] Subject: Re: ["data write: File too large"] On Tue, Jan 15, 2002 at 09:14:33AM -0500, Pedro Aguayo wrote: > But, I think Edwin doesn't have this problem, meaning he says he doesn't > have a file larger than 2gb. I had none, either, but the filesystem was dumped into a file as a whole, leading to a huge file, same with tar. The problem only occurs as holding-disk is used. Continuously writing a stream of unlimited size to a tape is no problem, but as soon as you try to do this onto a filesystem, you run into whatever limits you have, mostly 2GB-limits on a single file. No holding-disk -> no big file -> no problem. (well, tape might have to stop more often because of interruption in data-flow) Regards, Adrian Reyer -- Adrian Reyer Fon: +49 (7 11) 2 85 19 05 LiHAS - Servicebuero Stuttgart Fax: +49 (7 11) 5 78 06 92 Adrian Reyer & Joerg Henner GbR Mail: [EMAIL PROTECTED] Linux, Netzwerke, Consulting & Support http://lihas.de/
Re: ["data write: File too large"]
On Tue, Jan 15, 2002 at 09:14:33AM -0500, Pedro Aguayo wrote: > But, I think Edwin doesn't have this problem, meaning he says he doesn't > have a file larger than 2gb. I had none, either, but the filesystem was dumped into a file as a whole, leading to a huge file, same with tar. The problem only occurs as holding-disk is used. Continuously writing a stream of unlimited size to a tape is no problem, but as soon as you try to do this onto a filesystem, you run into whatever limits you have, mostly 2GB-limits on a single file. No holding-disk -> no big file -> no problem. (well, tape might have to stop more often because of interruption in data-flow) Regards, Adrian Reyer -- Adrian Reyer Fon: +49 (7 11) 2 85 19 05 LiHAS - Servicebuero Stuttgart Fax: +49 (7 11) 5 78 06 92 Adrian Reyer & Joerg Henner GbR Mail: [EMAIL PROTECTED] Linux, Netzwerke, Consulting & Support http://lihas.de/
RE: ["data write: File too large"]
That's a great idea, I'm going to save this one. But, I think Edwin doesn't have this problem, meaning he says he doesn't have a file larger than 2gb. Could be hidden, or maybe you mounted over a directory that had a huge file, just digging here. Pedro -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Adrian Reyer Sent: Tuesday, January 15, 2002 4:03 AM To: Pedro Aguayo Cc: Rivera, Edwin; [EMAIL PROTECTED] Subject: Re: ["data write: File too large"] On Mon, Jan 14, 2002 at 10:53:29AM -0500, Pedro Aguayo wrote: > Could be that your holding disk space is to small, or you trying to backup a > file that is larger than 2 gigs? Perhaps I misunderstand something here, but... The holding disk afaik holds the entire dump of the filesystem you try to back up, to make it one large file that can get onto tape faster once completed. So if your partition has more than 2GB in use, that file might be bigger than 2GB and you run into a filesystem limit. Had that problem with an older Linux installation, turning off holding-disk and dumping directly to tape works fine in that case. Regards, Adrian Reyer -- Adrian Reyer Fon: +49 (7 11) 2 85 19 05 LiHAS - Servicebuero Stuttgart Fax: +49 (7 11) 5 78 06 92 Adrian Reyer & Joerg Henner GbR Mail: [EMAIL PROTECTED] Linux, Netzwerke, Consulting & Support http://lihas.de/
Re: disk offline
On 15 Jan 2002 at 2:47pm, [EMAIL PROTECTED] wrote > FAILURE AND STRANGE DUMP SUMMARY: > sc01us0105 /software/amanda/share/ lev 0 FAILED [disk >/software/amanda/share/offline on sc01us0105?] > frostis.cf /export/home/setup/freeware/amanda-2.4.2p2/ lev 0 FAILED [disk >/export/home/setup/freeware/amanda-2.4.2p2/ offline on frostis.cf.jcyl.es?] > > Does anybody know why? > > My disklist is > > #hostname diskdevice ldumptype spindle interface > sc01us0105 /software/amanda/share/ nocomp-test 0 > frostis.cf.jcyl.es /export/home/setup/freeware/amanda-2.4.2p2/ >comp-user-tar 0 > > And i want to usr GNUTAR. What's in sendsize*debug, amandad*debug, and/or sendbackup*debug on the failing clients? Do they have GNUtar installed? Was it there when you ./configured amanda? -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
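[Editor's sketch] A quick way to follow Joshua's suggestion is to grep the client-side debug files for likely error text. The helper below is mine, not from the thread; /tmp/amanda is the usual debug directory for 2.4.x clients, so adjust the path if your build logs elsewhere:

```shell
#!/bin/sh
# List any Amanda client debug files whose contents look like an error.
# Run on the failing client, e.g.:  scan_debug /tmp/amanda
scan_debug() {
    debugdir=$1
    # -i: case-insensitive, -l: print only the matching file names
    grep -E -il 'error|failed|denied|offline' "$debugdir"/*.debug 2>/dev/null
}
```

Any file it lists (sendsize*debug, amandad*debug, sendbackup*debug) is the one to read in full.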
Re: amdump, backup a directory.
On Tue, 15 Jan 2002, Joshua Baker-LePain wrote: > On Tue, 15 Jan 2002 at 2:36pm, Matteo Centonza wrote > > > On Tue, 15 Jan 2002, Joshua Baker-LePain wrote: > > > > Hmm, if I remember correctly, I've made subtree dumps in the past with > > amanda. BTW, that's from the ext2fs dump manpage: > > > *snip* > > > > so it's possible (with some restrictions e.g. only full dumps). The same > > applies for xfsdump: > Yes, and that's the limitation. When amanda tries to do incrementals, it > will fail when dump tells it that it can't do that. So if you're only > doing full dumps, then dump is fine. But, for incrementals of > subdirectories, you *must* use tar. Yes, only full dumps. BTW, if you have to back up things like extended attributes, quota information, ACLs etc. (a la xfsdump), tar tout court is not a viable way. Bye, -m
disk offline
I'm here again, When I execute amdump, the backup fails FAILURE AND STRANGE DUMP SUMMARY: sc01us0105 /software/amanda/share/ lev 0 FAILED [disk /software/amanda/share/offline on sc01us0105?] frostis.cf /export/home/setup/freeware/amanda-2.4.2p2/ lev 0 FAILED [disk /export/home/setup/freeware/amanda-2.4.2p2/ offline on frostis.cf.jcyl.es?] Does anybody know why? My disklist is #hostname diskdevice dumptype spindle interface sc01us0105 /software/amanda/share/ nocomp-test 0 frostis.cf.jcyl.es /export/home/setup/freeware/amanda-2.4.2p2/ comp-user-tar 0 And I want to use GNUTAR. Thanks --- Javier Fernández Pérez Servicio de Informática Corporativa D.G. de Telecomunicaciones y Transportes Consejería de Fomento - Junta de Castilla y León Rigoberto Cortejoso, 14. 47014 Valladolid (Spain) e-mail: [EMAIL PROTECTED] ---
Re: amdump, backup a directory.
On Tue, 15 Jan 2002 at 2:36pm, Matteo Centonza wrote > On Tue, 15 Jan 2002, Joshua Baker-LePain wrote: > > Hmm, if I remember correctly, i've made subtree dump in the past with > amanda. BTW there's that's from ext2fs dump manpage: > *snip* > > so it's possible (with some restrictions e.g. only full dumps). The same > applies for xfsdump: > Yes, and that's the limitation. When amanda tries to do incrementals, it will fail when dump tells it that it can't do that. So if you're only doing full dumps, then dump is fine. But, for incrementals of subdirectories, you *must* use tar. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: amdump, backup a directory.
On Tue, 15 Jan 2002, Joshua Baker-LePain wrote: > On 15 Jan 2002 at 1:22pm, [EMAIL PROTECTED] wrote > > > Amanda cans make backups for a directory or only file systems? > > For example backup /export/home/amanda > > If you use dump as your backup program, then it can only do file systems. > > If you use tar, then you can do directories. > Hmm, if I remember correctly, I've made subtree dumps in the past with amanda. BTW, that's from the ext2fs dump manpage: [snip] file-to-dump is either a mountpoint of a filesystem or a directory to be backed up as a subset of a filesystem. In the former case, either the path to a mounted filesystem or the device of an unmounted filesystem can be used. In the latter case, certain restrictions are placed on the backup: -u is not allowed and the only dump level that is supported is -0. [snip] so it's possible (with some restrictions e.g. only full dumps). The same applies for xfsdump: ... -s pathname [ -s pathname ... ] Restricts the dump to files contained in the specified pathnames (subtrees). Up to 100 pathnames can be specified. A pathname must be relative to the mount point of the filesystem. For example, if a filesystem is mounted at /d2, the pathname argument for the directory /d2/users is ``users''. A pathname can be a file or a directory; if it is a directory, the entire hierarchy of files and subdirectories rooted at that directory is dumped. Subtree dumps cannot be used as the base for incremental dumps (see the -l option above). ... HTH, -m
Re: gnu tar question?
[EMAIL PROTECTED] wrote: > hi everyone again, > Does anybody know how can i use gnu tar to make incremental backups? > Is it posible? In the amanda.conf file there is a section called 'dumptype', where all the different kinds of dumps are defined. GNUTAR dumps usually have names that include 'tar'. In the disklist file, select a dumptype that includes tar in its name: client.localnet /home/users comp-user-tar That should do it. > > Thanks. > > > --- > Javier Fernández Pérez > Servicio de Informática Corporativa > D.G. de Telecomunicaciones y Transportes > > Consejería de Fomento - Junta de Castilla y León > Rigoberto Cortejoso, 14. 47014 Valladolid (Spain) > > e-mail: [EMAIL PROTECTED] > --- > > -- David T. Smith PGP Fingerprint: 7B01 0086 BC4E C092 5348 B9AE E79A 07F2 9E59 29C2 ph: 1 203 364 1796 fax: 1 203 364 1795 cell: 1 203 770 1685 E-mail [EMAIL PROTECTED]
Re: amdump, backup a directory.
[EMAIL PROTECTED] wrote: > Hi everyone, > Amanda cans make backups for a directory or only file systems? > For example backup /export/home/amanda It depends on the backup program selected. GNUTAR will back up starting from any directory, while dump can only back up a complete filesystem. DTS > > Thanks > > --- > Javier Fernández Pérez > Servicio de Informática Corporativa > D.G. de Telecomunicaciones y Transportes > > Consejería de Fomento - Junta de Castilla y León > Rigoberto Cortejoso, 14. 47014 Valladolid (Spain) > > e-mail: [EMAIL PROTECTED] > --- > > -- David T. Smith PGP Fingerprint: 7B01 0086 BC4E C092 5348 B9AE E79A 07F2 9E59 29C2 ph: 1 203 364 1796 fax: 1 203 364 1795 cell: 1 203 770 1685 E-mail [EMAIL PROTECTED]
amrecover
Hello Everyone, I have about 60 MS clients that AMANDA backs up. I had trouble figuring out the best way to offer individual/small file recovery to the users until a friend suggested that I install apache on the AMANDA server and do recovery through the web. The apache web server has a directory for each user. Each directory is password protected, and apache uses SSL to encrypt the data transfer. I've found that the web works great if the user needs to recover 5-10 files, but I think large directories and multiple files would be better recovered through ftp. Anyway, all I have to do is cd to the appropriate user's web directory and run amrecover from there. I then email the user alerting them that their files are ready and will be available for the rest of the day, and that's it. I've written a small script to go through and empty all the user directories each night. This setup seems to work pretty well; I was just curious how other admins handle MS file recovery?
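[Editor's sketch] The nightly cleanup described above can be a few lines of shell. This is not the poster's actual script; the function name and the web-root layout (one subdirectory per user) are assumptions from the description:

```shell
#!/bin/sh
# Empty every per-user recovery directory under the web root, keeping
# the (password-protected) directories themselves in place.
empty_recover_dirs() {
    webroot=$1
    for dir in "$webroot"/*/; do
        [ -d "$dir" ] || continue
        # remove regular and dot entries inside each user directory;
        # rm -f exits 0 even when a glob matched nothing
        rm -rf "$dir"/* "$dir"/.[!.]* "$dir"/..?* 2>/dev/null
    done
}
```

Run it from cron after the recovery window closes, e.g. `empty_recover_dirs /var/www/recover` (path illustrative).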
Re: gnu tar question?
On 15 Jan 2002 at 1:43pm, [EMAIL PROTECTED] wrote > Does anybody know how can i use gnu tar to make incremental backups? > Is it posible? > Just tell amanda to use 'program "GNUTAR"' in your dumptype. amanda will take care of the rest. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
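[Editor's sketch] A dumptype along the lines Joshua describes, sketched for a 2.4.x amanda.conf; the dumptype name and the `global` base it inherits from are illustrative:

```
define dumptype comp-user-tar {
    global                 # illustrative base dumptype
    comment "user directories dumped with GNU tar"
    program "GNUTAR"       # amanda drives tar's incremental state itself
    compress client fast
    priority medium
}
```

With `program "GNUTAR"`, amanda keeps the listed-incremental state files on the client, so incremental levels work for directories just as they do for dump on whole filesystems.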
Re: XFS, Linux, and Amanda
On Mon, 14 Jan 2002 at 8:51pm, Gene Heskett wrote > Humm, what I see is that you are using linux dump, specifically > built for the ext2 filesystem, on a XFS filesystem? Don't be silly. No. Amanda checks the fstab for filesystem type, and uses the appropriate dump tool for each filesystem (given that it was present when amanda was compiled). So, on Linux, amanda will use xfsdump when appropriate. The problem (we think) is that Dan had tried to use the RPMs, then hadn't quite cleaned them out before trying his own version. > Use tar instead, its *lots* more dependable anyway. > Than Linux dump -- yep. Than xfsdump? Well, xfsdump may be weird, but it's got years of service on IRIX. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: amdump, backup a directory.
On 15 Jan 2002 at 1:22pm, [EMAIL PROTECTED] wrote > Amanda cans make backups for a directory or only file systems? > For example backup /export/home/amanda If you use dump as your backup program, then it can only do file systems. If you use tar, then you can do directories. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
gnu tar question?
hi everyone again, Does anybody know how I can use GNU tar to make incremental backups? Is it possible? Thanks. --- Javier Fernández Pérez Servicio de Informática Corporativa D.G. de Telecomunicaciones y Transportes Consejería de Fomento - Junta de Castilla y León Rigoberto Cortejoso, 14. 47014 Valladolid (Spain) e-mail: [EMAIL PROTECTED] ---
amdump, backup a directory.
Hi everyone, Can Amanda make backups of a directory, or only of file systems? For example, backing up /export/home/amanda. Thanks --- Javier Fernández Pérez Servicio de Informática Corporativa D.G. de Telecomunicaciones y Transportes Consejería de Fomento - Junta de Castilla y León Rigoberto Cortejoso, 14. 47014 Valladolid (Spain) e-mail: [EMAIL PROTECTED] ---
Re: ["data write: File too large"]
On Mon, Jan 14, 2002 at 10:53:29AM -0500, Pedro Aguayo wrote: > Could be that your holding disk space is to small, or you trying to backup a > file that is larger than 2 gigs? Perhaps I misunderstand something here, but... The holding disk afaik holds the entire dump of the filesystem you try to back up, to make it one large file that can get onto tape faster once completed. So if your partition has more than 2GB in use, that file might be bigger than 2GB and you run into a filesystem limit. Had that problem with an older Linux installation, turning off holding-disk and dumping directly to tape works fine in that case. Regards, Adrian Reyer -- Adrian Reyer Fon: +49 (7 11) 2 85 19 05 LiHAS - Servicebuero Stuttgart Fax: +49 (7 11) 5 78 06 92 Adrian Reyer & Joerg Henner GbR Mail: [EMAIL PROTECTED] Linux, Netzwerke, Consulting & Support http://lihas.de/
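[Editor's sketch] Adrian's workaround of turning off the holding disk does not have to be global: in 2.4.x amanda.conf it can be set per dumptype, so only the oversized filesystem bypasses the holding disk. A sketch; the dumptype names are illustrative:

```
define dumptype comp-user-direct {
    comp-user          # illustrative existing dumptype to inherit from
    comment "stream straight to tape; no holding-disk file to hit 2GB"
    holdingdisk no
}
```

Point just the one disklist entry at this dumptype and leave the rest on the holding disk.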