amverify doesn't run
Hello again! I had a successful backup run last night, as shown in the log.20001128.0 file. 8-) When I run amverify it returns: Using device /dev/nst0 Waiting for device to go ready... which never happens. It sits there like that until I Ctrl-C it. What might be causing this? Is there an easier way to install the client piece than the whole configure/make/make install process? Randy
configure error: cannot find output from lex; giving up
I just downloaded Amanda, so I'm totally new to it. When I ran the configure script it aborted with the message: checking lex output-file root... ./configure: lex: command not found configure: error: cannot find output from lex; giving up Can someone help me with that? Which package do I need, and where can I get it from? Thanks for any advice. Tom
Re: configure error: cannot find output from lex; giving up
On Nov 29, 2000, Tom Hofmann <[EMAIL PROTECTED]> wrote: > Can someone help me with that? Which package do I need and where can I get it > from? Try flex. It's available at ftp.gnu.org. Or try Amanda 2.4.2, which, unless I'm mistaken, is supposed to not require lex at all. -- Alexandre Oliva Enjoy Guarana', see http://www.ic.unicamp.br/~oliva/ Red Hat GCC Developer aoliva@{cygnus.com, redhat.com} CS PhD student at IC-Unicamp oliva@{lsd.ic.unicamp.br, gnu.org} Free Software Evangelist *Please* write to mailing lists, not to me
Re: configure error: cannot find output from lex; giving up
Thank you. I installed it and now configure is running fine. Tom
Re: Client install
On Tue, 28 Nov 2000, Randolph Cordell wrote: > How is installing for the clients different than for the server? That is not > evident in anything I've read (README, INSTALL, and the entire chapter online > at www.amanda.org). Do I need to do the whole ./configure, make, make > install process for each client? It seems that's massive overkill. You can configure them --without-server. Otherwise, it's pretty much the same.
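In concrete terms, a client-only build might look like this sketch (the user, group, and prefix here are assumptions; substitute your own site's values):

```shell
# build a client-only Amanda; --without-server skips the tape-server pieces
./configure --without-server \
            --with-user=amanda \
            --with-group=backup \
            --prefix=/usr/local
make
make install    # run this step as root
```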
Re: HP-UX compiles
On Tue, 28 Nov 2000, Stephen Walton wrote: > I fixed my own problem, added it to the FAQ-O-Matic (I hope that's OK), > and humbly submit a correction for the HP-UX section of SYSTEM.NOTES. I > think the first paragraph should read: > > You may run into an internal /bin/sh limit when running the configure > script. The error message is: > > ./configure: sh internal 2K buffer overflow > > As of HP-UX 10.20, no such message is printed; instead you get an error > about sed's failure to parse a pattern. The workaround is to use ksh: > change the first line in the configure script from "#! /bin/sh" to > "#! /usr/bin/ksh". Those should naturally be without any space between ! and /.
Re: Completely Stuck :-(
>> bash-2.03# lsof -i | grep am >> inetd 19686 root 11u IPv4 0xe1dfac2c 0t0 UDP *:amanda (Idle) > > See, it's not (LISTEN), which means inetd has disabled the service > because of a previous failure. kill -HUP it and see if the state > changes. I hate to disagree with you, but in my experience Solaris always reports UDP ports as idle. For instance, another machine of mine comes up with the following lsof output: syslogd 177 root 4u inet 0x609b5240 0t0 UDP *:syslog (Idle) Yet I know for a fact that syslogd is working 100% OK on this box! - John
unsubscribe
unsubscribe -- Longina Przybyszewska, system programmer Phone: +45 6550 2359 Dept. of Math. & Comp. Sci. SDU, Odense University, Campusvej 55 Email: [EMAIL PROTECTED] DK-5230 Odense M, Denmark --
Re: Completely Stuck :-(
On Nov 29, 2000, John Cartwright <[EMAIL PROTECTED]> wrote: >> See, it's not (LISTEN), which means inetd has disabled the service > I hate to disagree with you, but in my experience Solaris always > reports UDP ports as idle. I stand corrected. Thank you and JJ for pointing out my mistake. -- Alexandre Oliva Enjoy Guarana', see http://www.ic.unicamp.br/~oliva/ Red Hat GCC Developer aoliva@{cygnus.com, redhat.com} CS PhD student at IC-Unicamp oliva@{lsd.ic.unicamp.br, gnu.org} Free Software Evangelist *Please* write to mailing lists, not to me
re: port 1024 is not secure
Thank you for your help. I have updated to Amanda 2.4.2. The system runs, but I must still configure a few things. Thank you Roshan
Backup up oracle database, part II
Hi all! Thanks to those of you who helped me get a backup of an Oracle database working. The solution of copying the db files to temporary storage and then having Amanda back up from there was good; it works! There is a problem, however: the copy command consumes a LOT of system resources (mainly disk, of course) and my server looks almost dead from time to time. I've tried nice -n 15 to make the cp command a bit nicer, but it doesn't seem to work... Any suggestions? This is a RedHat Linux 6.2 system. Thanks! /Fredrik Persson
unsubscribe
unsubscribe
RE: missing result for...
I don't receive any errors when running amcheck -c Amanda Backup Client Hosts Check Client check: 1 host checked in 0.303 seconds, 0 problems found Here are the other debug files: = --- sendsize.debug -- = sendsize: debug 1 pid 24669 ruid 11 euid 11 start time Wed Nov 29 00:45:01 2000 /usr//libexec/sendsize: version 2.4.2 = - = = -- selfcheck.debug -- = selfcheck: debug 1 pid 24046 ruid 11 euid 11 start time Tue Nov 28 16:00:01 2000 /usr//libexec/selfcheck: version 2.4.2 checking disk /var/log: device /var/log: OK checking disk /etc: device /etc: OK checking disk /home: device /home: OK selfcheck: pid 24046 finish time Tue Nov 28 16:00:01 2000 = - = = --- Amcheck.debug --- = amcheck: debug 1 pid 24042 ruid 11 euid 0 start time Tue Nov 28 16:00:00 2000 amcheck: pid 24042 finish time Tue Nov 28 16:00:18 2000 = - = = --- Amandad.debug --- = amandad: debug 1 pid 24667 ruid 11 euid 11 start time Wed Nov 29 00:45:00 2000 amandad: version 2.4.2 amandad: build: VERSION="Amanda-2.4.2" amandad:BUILT_DATE="Mon Nov 27 11:31:27 EST 2000" amandad:BUILT_MACH="Linux foo.bar.com 2.2.16-22 #1 Tue Aug 22 16:16:55 EDT 2000 i586 unknown" amandad:CC="gcc" amandad: paths: bindir="/usr//bin" sbindir="/usr//sbin" amandad:libexecdir="/usr//libexec" mandir="/usr//man" amandad:AMANDA_TMPDIR="/tmp/amanda" AMANDA_DBGDIR="/tmp/amanda" amandad:CONFIG_DIR="/usr//etc/amanda" DEV_PREFIX="/dev/" amandad:RDEV_PREFIX="/dev/" DUMP="/sbin/dump" amandad:RESTORE="/sbin/restore" SAMBA_CLIENT="/usr/bin/smbclient" amandad:GNUTAR="/bin/gtar" COMPRESS_PATH="/usr/bin/gzip" amandad:UNCOMPRESS_PATH="/usr/bin/gzip" MAILER="/usr/bin/Mail" amandad:listed_incr_dir="/usr//var/amanda/gnutar-lists" amandad: defs: DEFAULT_SERVER="foo.bar.com" amandad:DEFAULT_CONFIG="DailySet1" amandad:DEFAULT_TAPE_SERVER="foo.bar.com" amandad:DEFAULT_TAPE_DEVICE="/dev/null" HAVE_MMAP HAVE_SYSVSHM amandad:LOCKING=FLOCK SETPGRP_VOID DEBUG_CODE BSD_SECURITY amandad:USE_AMANDAHOSTS CLIENT_LOGIN="operator" FORCE_USERID amandad:HAVE_GZIP 
COMPRESS_SUFFIX=".gz" COMPRESS_FAST_OPT="--fast" amandad:COMPRESS_BEST_OPT="--best" UNCOMPRESS_OPT="-dc" got packet: Amanda 2.4 REQ HANDLE 000-78790708 SEQ 975476700 SECURITY USER operator SERVICE sendsize OPTIONS maxdumps=1;hostname=localhost; GNUTAR /var/log 0 1970:1:1:0:0:0 -1 exclude-list=/usr/local/lib/amanda/exclude.gtar GNUTAR /home 0 1970:1:1:0:0:0 -1 exclude-list=/usr/local/lib/amanda/exclude.gtar sending ack: Amanda 2.4 ACK HANDLE 000-78790708 SEQ 975476700 bsd security: remote host localhost.localdomain user operator local user operator amandahosts security check passed amandad: running service "/usr//libexec/sendsize" sendsize: reading /etc/amandates: Is a directory amandad: sending REP packet: Amanda 2.4 REP HANDLE 000-78790708 SEQ 975476700 = - = Thanks again for your time. -Original
Re: amrecover
Brian and Alexandre! > > but once the process is finished, i get back to the amrecover > > prompt, and i cannot find the stuff i wanted to be restored. > Note that stuff will be restored into a directory tree that mirrors > the tree of the backed-up filesystem. So, if you restore bar/baz that > was originally in /foo, where /foo is the root of a filesystem (or a > subdirectory of / listed in the disklist), amrecover will get you > `bar/baz', not just `baz'. The question is, does ANYTHING at all appear? Perhaps you could use another virtual terminal to do an `ls' after the recover returns a prompt. Or you could use amrestore /dev/st0 hostname diskname and see what that turns up. Naturally you'd substitute the values I put in... DL -- Don't you find it rather touching to behold The OS that came in from the cold Seen for what it is: religion, plus finesse Countries, creeds, mean nothing - only Linux...
Re: Client install
I agree, this topic is largely uncovered in the documentation. I spent several days trying to figure out how to set things up, until I realized that amanda had to be installed in full on the client machines as well. I had incorrectly assumed that amanda used some kind of UNIX networking to suck the data from the client computers, but I was confused as to why I never had to specify any authentication to be able to access those computers. As for massive overkill, it only installs less than a meg of binaries on each client. Not too bad. Compare that with Windows bloatware, and it's microscopic. :) Eric Wadsworth On Wed, 29 Nov 2000, Harri Haataja wrote: > On Tue, 28 Nov 2000, Randolph Cordell wrote: > > > How is installing for the clients different than for the server? That is not > > evident in anything I've read (README, INSTALL and the entire chapter online > > at www.amanda.org). Do I need to do the whole ./configure, make, make > > install process for each client? It seems that's massive overkill. > > You can configure them --without-server. Otherwise, it's pretty much the > same.
Re: Client install
Check the docs/INSTALL file. I'm not sure but I think the compile option is something like --without-server And you just follow the client install procedure for the appropriate inetd.conf or xinetd.conf entry. On Wed, 29 Nov 2000, Eric Wadsworth wrote: > I agree, this topic is largely uncovered in the documentation. I spent > several days trying to figure out how to set things up, until I realized > that amanda had to be installed in full on the client machines as well. I > had incorrectly assumed that amanda used some kind of UNIX networking to > suck the data from the client computers, but I was confused as to why I > never had to specify any authentication to be able to access those > computers. > > As for massive overkill, it only installs less than a meg of binaries on > each client. Not too bad. Compare that with Windows bloatware, and it's > microscopic. :) > > Eric Wadsworth > > On Wed, 29 Nov 2000, Harri Haataja wrote: > > > On Tue, 28 Nov 2000, Randolph Cordell wrote: > > > > > How is installing for the clients different than for the server? That is not > > > evident in anything I've read (README, INSTALL and the entire chapter online > > > at www.amanda.org). Do I need to do the whole ./configure, make, make > > > install process for each client? IT seems that's massive overkill. > > > > You can configure them --without-server. Otherwise, it's pretty much the > > same. > > > > > > >
Re: [Fwd: Problems with amanda and Red-Hat 7.0]
Hi, your suggestion partially worked. I added a line like: ALL : localhost 127.0.0.1 128.197.61.90 in hosts.allow. (The fact is that, with the release of xinetd, you can filter packets based on the service being requested either with the hosts.allow file or with the xinetd configuration files in /etc/xinetd.d, which I think is quite confusing!) Here is the output I get running "amcheck -c DailySet1": Amanda Backup Client Hosts Check protocol packet receive: Connection refused protocol packet receive: Connection refused WARNING: raffaello: selfcheck request timed out. Host down? Client check: 1 host checked in 29.997 seconds, 1 problem found. (brought to you by Amanda 2.4.1p1) Going through the /var/log/messages file I have found the following records: Nov 29 11:41:56 raffaello xinetd[16241]: xinetd Version 2.1.8.9pre11 started with Nov 29 11:41:56 raffaello xinetd[16241]: libwrap Nov 29 11:41:56 raffaello xinetd[16241]: options compiled in. Nov 29 11:41:56 raffaello xinetd[16241]: Started working: 8 available services Nov 29 11:41:59 raffaello xinetd: xinetd startup succeeded Nov 29 11:42:04 raffaello xinetd[16241]: amanda service was deactivated because of looping Nov 29 11:42:04 raffaello xinetd[16241]: recv: Bad file descriptor (errno = 9) Does anybody know what "looping" means in this context? Thank you. Bye, Antonino Casile
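For what it's worth, xinetd reports "looping" when it thinks a service is being respawned too rapidly; with datagram (UDP) services such as amandad this commonly happens when the entry has wait = no, because xinetd keeps seeing the same request on the socket. A sketch of an /etc/xinetd.d/amanda entry with wait = yes follows; the paths and user name are assumptions, so adjust them to your install:

```
service amanda
{
        socket_type     = dgram
        protocol        = udp
        wait            = yes
        user            = amanda
        server          = /usr/local/libexec/amandad
        disable         = no
}
```

After editing, send xinetd a reconfigure signal (or restart it) so the new entry takes effect.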
tapetype data for OnStream ADR50
Hi! Eureka, I got it. After spending a week with the tapetype program of Amanda 2.4.1p1, I tried the one from 2.4.2, and it finished after one day. So here are the data: define tapetype ADR50 { comment "just produced by tapetype program" length 23616 mbytes filemark 32 kbytes speed 563 kps } Greetings Olaf
unsubscribe
unsubscribe
DUMPed /home trumps TARred /home/tim in index (fwd)
I'm still tinkering with AMANDA. I've pretty much committed to GNUTAR, because I'm on an HP with logical volumes and can't get dump to work on the clients. At one point, I tried dump on /home on the server and it worked. Since then, I've been using TAR on /home/tim. Today I deleted a file and tried to restore it, but according to amrecover, it had not been backed up, as it was insisting setdisk be /home, which hadn't been backed up lately. The file ~had~ been backed up in the normal tar run. Do I need to wipe out my past flirtations with DUMP from the index before I can get TAR to work? Is fidelity critical? Can I get that DUMP expunged from my record? BTW What was the final verdict on the "--listed-incremental" and "--incremental" flurry recently?
amflush question
Hi everyone: I have a question about amflush. If I have a huge amount of data in my holding disk and one tape is not enough, how can I use amflush with more than one tape? Please, any help would be appreciated. Sandra
Re: amflush question
>I have a question about amflush. If i have huge data in my holding disk >and one tape is not enough how can I do to use amflush with more than >one tape? I assume you have multiple dump images in the holding disk and it's the total size of them that's large, rather than a single huge image that won't fit on a tape, right? Run amflush once and let it process what it can. Then run it again. And again. Eventually it will get the disk cleaned out, one tape worth at a time. >Sandra John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
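If you'd rather not babysit the repeated runs, they can be scripted. This is only a sketch: the config name, holding-disk path, and the -f (foreground) flag are assumptions to check against your own setup and amflush(8):

```shell
#!/bin/sh
# Re-run amflush until no dump images remain in the holding disk,
# flushing one tape's worth per pass.
CONF=DailySet1                   # hypothetical config name
HOLDING=/var/amanda/holding      # hypothetical holding-disk path
while [ -n "$(find "$HOLDING" -type f 2>/dev/null | head -1)" ]; do
    amflush -f "$CONF" || break  # -f keeps amflush in the foreground; stop on error
done
```

You would still need to change tapes between passes (or have a changer do it).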
Re: tapetype data for OnStream ADR50
>... So here are the data: Please go to www.sourceforge.net and post your results to the Amanda FAQ so others can find it in the future. >Olaf John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
unsubscribe
unsubscribe
Re: Backup up oracle database, part II
>... the copy command consumes a LOT of system resources (mainly >disk of course) and my server almost look dead from time to time. I've tried >nice -n 15 to make the cp command a bit more nice, but it doesn't seem to >work... The "nice" command only affects CPU priority, which obviously won't help your problem. >Any suggestions? This is a RedHat Linux 6.2 system. Buy more hardware :-). I don't know much about Linux, but here are some very general ideas: * Put the main disks and the temp disks on different controllers. * Upgrade the disks to faster versions. * Upgrade the controller(s) to faster versions. * Get a system with a faster main bus or multiple buses. You should also examine the various system options (BIOS, etc.) and/or talk with other Linux folks about whether there are any tunables in there, or if there are "good" and "bad" hardware combinations. You should also make sure there isn't anything silly going on, such as bad termination, bad cables, total cable length too long, or a non-FW device before an FW device in the chain. I doubt software will be able to fix this. You might do the copy yourself (as I recall, you were converting to Perl?) and add some (u)sleep time every few hundred I/O operations. Make the amount of time and the number of operations tunable, because you will certainly need to play with them a lot to get the balance right. >/Fredrik Persson John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
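The self-throttled copy John suggests can be sketched in plain shell: copy in fixed-size chunks and pause between them so other I/O gets a turn. Everything here is illustrative; the chunk size and pause are exactly the tunables he describes.

```shell
#!/bin/sh
# throttled_cp SRC DST [CHUNK_BYTES] [PAUSE_SECONDS]
# Copies SRC to DST one chunk at a time, sleeping between chunks so
# the disk is not saturated for the whole duration of the copy.
throttled_cp() {
    src=$1; dst=$2; bs=${3:-1048576}; pause=${4:-1}
    size=$(($(wc -c < "$src")))
    i=0
    : > "$dst"                          # start with an empty destination
    while [ $((i * bs)) -lt "$size" ]; do
        # copy chunk i (dd handles the short final chunk automatically)
        dd if="$src" of="$dst" bs="$bs" skip="$i" seek="$i" \
           count=1 conv=notrunc 2>/dev/null
        i=$((i + 1))
        sleep "$pause"                  # let other disk I/O through
    done
}
```

For example, `throttled_cp /oracle/data/file.dbf /amanda/tmp/file.dbf 4194304 1` (hypothetical paths) moves at most 4 MB per second; tune both numbers against your hardware.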
Re: missing result for...
>sendsize: reading /etc/amandates: Is a directory I swear I'm going to get rid of that damned thing :-). /etc/amandates is supposed to be a file, not a directory. Do this: # rm -fr /etc/amandates # touch /etc/amandates # chown <amanda-user> /etc/amandates Just out of curiosity, did you create (mkdir) /etc/amandates? If so, why? John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
unsubscribe
unsubscribe
Re: Filesystem offline in cluster ENV
>Here is the info from sendsize.debug ( client ) > >sendsize: getting size via dump for /u14 level 0 >sendsize: running "/opt/amanda/libexec/rundump (/usr/sbin/ufsdump) 0Ssf 1048576 - >/u14" >running /opt/amanda/libexec/killpgrp > DUMP: `/u14' is not on a locally mounted filesystem > DUMP: The ENTIRE dump is aborted. It's been a while since I've used 2.4.1p1, but I think the combination of "/u14" being passed to rundump and the error from dump itself both point to a system configuration problem, e.g. /etc/vfstab. Amanda takes the file system name and converts it to a /dev entry via the standard system calls (e.g. getmntent). It doesn't appear to have been able to do that. Take a look at the lines for some of your other file systems that do work and see if this isn't true. Assuming it is, you'll need to get that fixed before Amanda (actually, the dump program) can be called correctly. Or, if you cannot do this (because of whatever clustering means), you might change your disklist entry to use the /dev name instead of /u14. But that may still not let Amanda call the right program (vxdump). >Ruksana Siddiqui John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
Re: uns*bscr*be
Can somebody please explain this recent rash of people posting uns*bscr*be messages to the list? What's the matter, don't people pay *any* attention to the instructions they get when they s*bscr*be? Oh, I'm being silly. In this day and age, who has time to actually read instructions? Far better to bug several hundred other people with your unsuccessful attempts to uns*bscr*be than to actually have to spend a few seconds thinking. -- Paul Tomblin <[EMAIL PROTECTED]>, not speaking for anybody Meeting, n.: An assembly of people coming together to decide what person or department not represented in the room must solve a problem.
odd 'selfcheck request timed out' problem
After installing and using Amanda without any problems not solvable by some RTFM'ing, I'm now stuck. My server is a recently upgraded Solaris box running 2.4.2. All my working clients are also Solaris but running the 2.4.1p1 version. The problem child is a new RH Linux 6.2 box that I just compiled and installed the same 2.4.2 source on. It appears to be making the connection but is not completing for some reason. The selfcheck.debug and amandad.debug from the client are included below. There is a firewall in between, but it is currently set to allow all UDP traffic between the two hosts and it is not logging any rejected packets, so I don't think it is the problem. Any clues would be appreciated. Thanks, Frank selfcheck.debug: /usr/local/libexec/selfcheck: version 2.4.2 checking disk /etc: device /etc: OK selfcheck: pid 26303 finish time Wed Nov 29 11:59:35 2000 amandad.debug: amandad: debug 1 pid 26302 ruid 999 euid 999 start time Wed Nov 29 11:59:35 2000 amandad: version 2.4.2 amandad: build: VERSION="Amanda-2.4.2" amandad:BUILT_DATE="Wed Nov 29 10:46:57 CST 2000" amandad:BUILT_MACH="Linux p4.hoovers.com 2.2.14-5.0 #1 Tue Mar 7 21:07:39 EST 2000 i686 unknown" amandad:CC="gcc" amandad: paths: bindir="/usr/local/bin" sbindir="/usr/local/sbin" amandad:libexecdir="/usr/local/libexec" mandir="/usr/local/man" amandad:AMANDA_TMPDIR="/tmp/amanda" AMANDA_DBGDIR="/tmp/amanda" amandad:CONFIG_DIR="/usr/local/etc/amanda" DEV_PREFIX="/dev/" amandad:RDEV_PREFIX="/dev/" GNUTAR="/bin/gtar" amandad:COMPRESS_PATH="/bin/gzip" UNCOMPRESS_PATH="/bin/gzip" amandad:MAILER="/usr/bin/Mail" amandad:listed_incr_dir="/usr/local/var/amanda/gnutar-lists" amandad: defs: DEFAULT_SERVER="p4.hoovers.com" DEFAULT_CONFIG="normal" amandad:DEFAULT_TAPE_SERVER="clone" amandad:DEFAULT_TAPE_DEVICE="/dev/rmt/0bn" HAVE_MMAP HAVE_SYSVSHM amandad:LOCKING=POSIX_FCNTL SETPGRP_VOID DEBUG_CODE BSD_SECURITY amandad:USE_AMANDAHOSTS CLIENT_LOGIN="backup" FORCE_USERID HAVE_GZIP amandad:COMPRESS_SUFFIX=".gz" 
COMPRESS_FAST_OPT="--fast" amandad:COMPRESS_BEST_OPT="--best" UNCOMPRESS_OPT="-dc" got packet: Amanda 2.4 REQ HANDLE 007-0005C6E8 SEQ 975520772 SECURITY USER backup SERVICE selfcheck OPTIONS ; GNUTAR /etc 0 OPTIONS |;bsd-auth;index;exclude-list=/export/home/backup/exclude.gtar; sending ack: Amanda 2.4 ACK HANDLE 007-0005C6E8 SEQ 975520772 bsd security: remote host clone.hoovers.com user backup local user backup amandahosts security check passed amandad: running service "/usr/local/libexec/selfcheck" amandad: sending REP packet: Amanda 2.4 REP HANDLE 007-0005C6E8 SEQ 975520772 OPTIONS ; OK /etc OK /usr/local/libexec/runtar executable OK /bin/gtar executable OK /etc/amandates read/writable OK /usr/local/var/amanda/gnutar-lists/. read/writable OK /dev/null read/writable OK /tmp/amanda has more than 64 KB available. OK /tmp/amanda has more than 64 KB available. OK /etc has more than 64 KB available. amandad: got packet: Amanda 2.4 REQ HANDLE 007-0005C6E8 SEQ 975520772 SECURITY USER backup SERVICE selfcheck OPTIONS ; GNUTAR /etc 0 OPTIONS |;bsd-auth;index;exclude-list=/export/home/backup/exclude.gtar; amandad: It's not an ack amandad: sending REP packet: Amanda 2.4 REP HANDLE 007-0005C6E8 SEQ 975520772 OPTIONS ; OK /etc OK /usr/local/libexec/runtar executable OK /bin/gtar executable OK /etc/amandates read/writable OK /usr/local/var/amanda/gnutar-lists/. read/writable OK /dev/null read/writable OK /tmp/amanda has more than 64 KB available. OK /tmp/amanda has more than 64 KB available. OK /etc has more than 64 KB available. amandad: waiting for ack: timeout, retrying amandad: waiting for ack: timeout, retrying amandad: waiting for ack: timeout, retrying amandad: waiting for ack: timeout, retrying amandad: waiting for ack: timeout, giving up! amandad: pid 26302 finish time Wed Nov 29 12:00:35 2000 -- Frank Smith [EMAIL PROTECTED] Systems Administrator Voice: 512-374-4673 Hoover's Online Fax: 512-374-4501
No Subject
unsubscribe
Reset history/data for a particular host
Hi, One of my hosts, "merlin", is not being backed up consistently, as amanda/planner complains about no estimate or historical data for a planned level 1 dump, when a level 0 dump was done just the other day... So, I suspect the database for that host is subtly corrupted, and I want to reset it somehow. Any pointers? Note this behaviour continues even after I force a full dump of all the directories for that host. It seems only certain directories are consistently failing... Edwin
Re: uns*bscr*be
On Wed, Nov 29, 2000 at 02:13:32PM -0500, Paul Tomblin wrote: > > Can somebody please explain this recent rash of people posting uns*bscr*be > messages to the list? What's the matter, don't people pay *any* attention > to the instructions they get when they s*bscr*be? Oh, I'm being silly. > In this day and age, who has time to actually read instructions? Far > better to bug several hundred other people with your unsuccessful attempts > to uns*bscr*be than to actually have to spend a few seconds thinking. > > -- > Paul Tomblin <[EMAIL PROTECTED]>, not speaking for anybody > Meeting, n.: > An assembly of people coming together to decide what person or > department not represented in the room must solve a problem. > The list server can no doubt be set up to divert such traffic. Unfortunately that increases the workload of the list maintainer. On the lists I maintain, there are unsubscribe instructions in the footer of each post. That doesn't entirely eliminate such requests being sent to the list, but at least it isn't an issue of users locating email that was (or was not) saved two computers ago. For example, Contributions/Posts To: [EMAIL PROTECTED] To Unsubscribe: [EMAIL PROTECTED], "unsubscribe" in message body Report Problems to: [EMAIL PROTECTED] List archive at: http://www.ssc.com/mailing-lists/ -- - Dan Wilder <[EMAIL PROTECTED]> Technical Manager & Correspondent SSC, Inc. P.O. Box 55549 Phone: 206-782-7733 x123 Seattle, WA 98155-0549 URL: http://www.linuxjournal.com/ -
Re: uns*bscr*be
Quoting Dan Wilder ([EMAIL PROTECTED]): > On Wed, Nov 29, 2000 at 02:13:32PM -0500, Paul Tomblin wrote: > > Can somebody please explain this recent rash of people posting uns*bscr*be > > messages to the list? What's the matter, don't people pay *any* attention > > The list server can no doubt be set up to divert such traffic. > Unfortunately that increases the workload of the list maintainer. As a list maintainer myself, I'm well aware of that. That's why I obfuscated the word "s*bscr*be". > On the lists I maintain, there are unsubscribe instructions in the footer > of each post. That doesn't entirely eliminate such requests being Yes, I do that as well, ever since I switched to using mailman instead of majordomo. It seems to work fairly well, even with such perpetually computer-clueless people as pilots. -- Paul Tomblin <[EMAIL PROTECTED]>, not speaking for anybody God is real, unless declared as an integer.
Re: uns*bscr*be
So, if enough others on the list would favor the addition of uns*bscr*be instructions to the posts, and if somebody can contact the list maintainer and persuade him or her to add such, we might see a decrease in such nuisance postings. On Wed, Nov 29, 2000 at 03:10:59PM -0500, Paul Tomblin wrote: > Quoting Dan Wilder ([EMAIL PROTECTED]): > > On Wed, Nov 29, 2000 at 02:13:32PM -0500, Paul Tomblin wrote: > > > Can somebody please explain this recent rash of people posting uns*bscr*be > > > messages to the list? What's the matter, don't people pay *any* attention > > > > The list server can no doubt be set up to divert such traffic. > > Unfortunately that increases the workload of the list maintainer. > > As a list maintainer myself, I'm well aware of that. That's why I > obfuscated the word "s*bscr*be". > > > On the lists I maintain, there are unsubscribe instructions in the footer > > of each post. That doesn't entirely eliminate such requests being > > Yes, I do that as well, ever since I switched to using mailman instead of > majordomo. It seems to work fairly well, even with such perpetually > computer-clueless people as pilots. > > -- > Paul Tomblin <[EMAIL PROTECTED]>, not speaking for anybody > God is real, unless declared as an integer. > -- - Dan Wilder <[EMAIL PROTECTED]> Technical Manager & Correspondent SSC, Inc. P.O. Box 55549 Phone: 206-782-7733 x123 Seattle, WA 98155-0549 URL: http://www.linuxjournal.com/ -
Re: Reset history/data for a particular host
>One of my hosts, "merlin" is not being backed up constantly as >amanda/planner complains about no estimate or historical data for a >planned level 1 dump, when a level 0 dump was done just the other day... My guess is the estimates are taking longer than Amanda is willing to wait. Take a look at the first and last lines of sendsize*debug in /tmp/amanda on the client, calculate the amount of time it took, then look at the etimeout variable in the amanda(8) man page. Note that getting an incremental estimate almost always takes longer than getting an estimate for a full dump because of the extra decision making involved. >Edwin John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
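The timeout John mentions is set in amanda.conf on the server; per the amanda(8) man page, a positive etimeout is seconds allowed per filesystem, while a negative value is a total for all of a host's filesystems. A hypothetical fragment:

```
# amanda.conf: allow slow clients up to 10 minutes per filesystem
# for their size estimates (the default is 300 seconds)
etimeout 600
```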
Re: uns*bscr*be
On Wed, 29 Nov 2000, Dan Wilder wrote: > The list server can no doubt be set up to divert such traffic. > Unfortunately that increases the workload of the list maintainer. Not necessarily. One can simply divert administrative requests to /dev/null. Or auto-respond with a canned message. -Mitch
Re: DUMPed /home trumps TARred /home/tim in index (fwd)
>... it was insisting setdisk be /home ... Huh? What was your current working directory when you started amrecover? Did you try "setdisk /home/tim" and if so and it failed, what did it say? >Do I need to wipe out my past flirtations with DUMP >from the index before I can get TAR to work? It's possible you'll need to wipe out the memory of /home, but DUMP vs tar has nothing to do with it. The index files are identical. John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
Re: Backup up oracle database, part II
On Wed, 29 Nov 2000, Fredrik Persson P (QRA) wrote: > Hi all! > > Thanks to those of you who helped me getting a backup for an oracle database > working. > > Now, the solution to copy the db files to a temp storage and then have > amanda back up from there was good - it works! There is a problem however, Ouch. You'll need a lot of space which will be wasted while not holding the copied data files. I haven't seen part 1, but why not simply put all of the tablespaces into hot backup mode? Back up the data files and archivelogs normally, then back up the control files last, or alternatively back the control files up to trace. Just make sure you have plenty of archivelog space for transactions which hit the database during the backup. In addition you should do a database export; you're more likely to have a developer drop a table by mistake than have a disk fail, and it's much easier to get data back from an export. Now, if you want some redundancy *and* faster backups then you could put the database on a raid 1 mirror. When it comes time to do the backup, break the mirror and mount the broken mirror somewhere. Back up the broken mirror rather than the live database. Once the backup is done, unmount and resync the mirror. Make use of all the wasted temp space. I don't think the linux md driver is up to this but some of the newer stuff might be. > and that is that the copy command consumes a LOT of system resources (mainly > disk of course) and my server almost look dead from time to time. I've tried > nice -n 15 to make the cp command a bit more nice, but it doesn't seem to > work... > > Any suggestions? This is a RedHat Linux 6.2 system. > > Thanks! > > /Fredrik Persson > --
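The hot-backup sequence Harri describes looks roughly like the sketch below. This is illustrative only: the tablespace name, the sysdba login, and the surrounding copy step are all placeholders, and the exact syntax should be checked against your Oracle release.

```shell
#!/bin/sh
# Put a tablespace into hot backup mode, copy its datafiles, then
# take it out again and capture the control file to trace.
sqlplus -s "/ as sysdba" <<'EOF'
ALTER TABLESPACE users BEGIN BACKUP;
EOF

# ... back up the datafiles here (cp, tar, or an Amanda run) ...

sqlplus -s "/ as sysdba" <<'EOF'
ALTER TABLESPACE users END BACKUP;
ALTER SYSTEM ARCHIVE LOG CURRENT;
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
EOF
```

A real script would loop over every tablespace and check each sqlplus exit status before moving on.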
Re: Client install
>How is installing for the clients different than for the server? ... As others have mentioned, it's basically just like the server. If you want, you can disable building some parts, but I don't bother. >That is not >evident in anything I've read (README, INSTALL and the entire chapter online >at www.amanda.org). Do I need to do the whole ./configure, make, make >install process for each client? It seems that's massive overkill. Yes, it would be overkill, but I doubt that's what most people do. This is more of a general Unix admin question than an Amanda issue, which is why it's not in the docs (we don't cover how to type commands to the shell or use an editor, either :-). In my case, I build one copy for each unique type of host (usually based on hardware and OS), then rdist the result around. To make my life easier, I install Amanda into its own little area (e.g. /opt/amanda-2.4.2), but that's just a detail. >Randy Cordell John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
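That build-once, push-everywhere flow might look like the following sketch; the prefix, the host names, and the substitution of rsync for rdist are all my assumptions:

```shell
#!/bin/sh
# Build into a self-contained tree once per OS/hardware combination...
./configure --prefix=/opt/amanda-2.4.2 --without-server
make && make install

# ...then push that tree to every client of the same type.
for host in client1 client2 client3; do
    rsync -a /opt/amanda-2.4.2/ "$host:/opt/amanda-2.4.2/"
done
```

Each client then only needs its inetd.conf (or xinetd) entry and the amanda user added locally.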
Re: Backup up oracle database, part II
>Just make sure you have plenty of archivelog space for transactions which >hit the database during the backup. As I understand it, this was a major reason we don't do this but use the "big backup area" approach (which I proposed in "part I"). I'm not an Oracle type, but the guy who put this together here is very good. And we've "tested" (well, OK, the thing crashed and we had to recover :-) the result -- several times. >Now, if you want some redundancy *and* faster backups then you could put >the database on a raid 1 mirror. When it comes to do the backup, break the >mirror and mount the broken mirror somewhere. ... How does that guarantee the disks in the mirror are up to date, i.e. logically consistent from Oracle's point of view? This technique comes up once in a while for normal backups, too, and it's never made any sense to me. It won't work any better than what dump (or tar, in some cases) does now. John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]