Re: gnu tar-1.15+ for RHEL4

2006-04-07 Thread Jay Fenlason
On Fri, Apr 07, 2006 at 12:08:04PM -0400, Christopher Linn wrote:
> i am installing amanda on a RHEL4 workstation.  when using 
> amanda-backup_client-2.5.0-1.rhel4.i386.rpm from the zmanda site,
> it fails with:
> 
> error: Failed dependencies:
> tar >= 1.15 is needed by amanda-backup_client-2.5.0-1.rhel4.i386
> 
> yet when i have searched for a tar >= 1.15 rpm i have only found rpms 
> for RHEL4 going up through tar-1.14-9.RHEL4.src.rpm (this on rpmfind).
> >= 1.15 rpms are only available for SuSE, Mandriva, Fedora and Mandrake.
> 
> is it possible to use an rpm from one of these other distros?  what do 
> other folks do about this?

You could always grab the tar .src.rpm from Rawhide or Fedora Core 5
and rebuild it on your RHEL-4 box.  Or you could grab the amanda
.src.rpm, rewrite the tar dependency, and rebuild it.  That's probably
what I would do.

It sounds to me like a packaging error that the rpm doesn't work with
a version of tar that comes with the OS and is known to have been
patched to work with Amanda.  But what do I know about packaging? :-)

-- JF


Re: /etc/dumpdates

2005-12-20 Thread Jay Fenlason
On Mon, Dec 19, 2005 at 07:24:33PM -0500, Matt Hyclak wrote:
> On Mon, Dec 19, 2005 at 06:47:15PM -0500, Paul Seniuk enlightened us:
> > Matt,
> > 
> > Well you were right and that worked. Annoying story to it ...collegue
> > decided to upgrade the box to FC4 and not tell me.
> > The upgrade turned SELinux on by default.
> > 
> > Merry Xmas Matt and thanks :)
> > 
> > 
> 
> Sounds like there needs to be some work done on the selinux definitions. 
> 
> /me pokes Jay

/me points Matt at Dan Walsh et al. :-)

I'm running my Amanda server on a Rawhide box, and I don't see many
AVC denied messages in my logs, so maybe they've improved the policy
for FC5.

-- JF


Re: Changing User when using RPMs

2005-11-28 Thread Jay Fenlason
On Sun, Nov 27, 2005 at 08:48:09PM -0700, Broderick Wood wrote:
> Is there anyone out there who is using the RedHat RPMs but has to change 
> the user that runs the backups?
> 
> ie.  I want to change the name from "amanda" to "backup" or some other ID 
> but runtar seems to have the id "amanda" hard coded into it.

Yep.  It's a royal pain.  I've had
https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=124510
open since the dawn of time (or so it seems) because it's so hard to
fix.  Here's what you have to do.

1) Remember that Red Hat cannot support hand-built rpms.  If you're
paying for support, you should contact your support person instead of
following these directions.

2) Download and install the appropriate amanda-{version}-{release}.src.rpm
for your release of Red Hat Linux/Red Hat Enterprise Linux/Fedora Core.

3) Edit /usr/src/redhat/SPECS/amanda.spec.  Change every occurrence of
"amanda" as a username to the new username.  These will be the useradd
line and the %attr lines.  Also change the release number by appending
your name or initials to it.

4) Build new Amanda rpms with "rpmbuild -ba amanda.spec" in the
/usr/src/redhat/SPECS directory.

5) Back up your current Amanda files.

6) Remove the old Amanda rpms.

7) Make sure the "amanda:" entry is gone from /etc/passwd.  Remove it
by hand (with userdel) if you have to.

8) Install the new rpms you just built.  They'll be in
/usr/src/redhat/RPMS/{your arch}/
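For step 3, a rough sketch of the spec-file rewrite (hypothetical field names based on a typical spec layout; a real spec needs checking by hand) could look like:

```python
import re

def rename_amanda_user(spec_text, new_user, release_suffix=".local"):
    """Sketch: rewrite the useradd and %attr lines of an amanda.spec
    to use a different username, and tag the Release number so the
    rebuilt rpm is distinguishable from the stock one.
    Caveat: %attr lines can also contain paths with "amanda" in them
    that must NOT be renamed -- review the result before building."""
    out = []
    for line in spec_text.splitlines():
        if "useradd" in line or line.lstrip().startswith("%attr"):
            # whole-word match, so names like "amandates" are left alone
            line = re.sub(r"\bamanda\b", new_user, line)
        if line.startswith("Release:"):
            line = line.rstrip() + release_suffix
        out.append(line)
    return "\n".join(out)
```

This only automates the tedious part of step 3; the useradd UID/GID choices and the %attr review still need a human eye.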

If you can find a way to have rpm automatically change the username on
upgrade, let me know so I can make that change to the rawhide Amanda
rpms, and we can save every other Red Hat / Fedora user this pain.

-- JF


Re: Why Oh Why only THIS DLE is giving me those timeout problems ?

2005-08-30 Thread Jay Fenlason
On Tue, Aug 30, 2005 at 01:15:48PM -0400, Guy Dallaire wrote:
> 2005/8/30, Graeme Humphries <[EMAIL PROTECTED]>:
> > Guy Dallaire wrote:
> > 
> > >>Perhaps the problem DLE has lots of hard links?
> > >>
> > >>
> > >Hard Links 
> > >
> > >
> > As opposed to symbolic links. You'll probably want to read up at
> > Wikipedia on the two if you haven't heard about them before:
> > 
> > http://en.wikipedia.org/wiki/Hard_link
> > http://en.wikipedia.org/wiki/Symbolic_link
> > 
> > Hard linking is fairly deprecated these days AFAIK, but it's still used
> > in certain circumstances.
> > 
> 
> Yes, thanks. I know about hard links. But how would it impact the size
> or performance of my backups ? And is there a way to find the number
> of HARD links ?

Each time gnutar sees a file with a link count > 1, it allocates a
structure to keep track of it, so the file can be correctly relinked
on restore.  If your filesystem has enough of them, you can
run gnutar out of memory.
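To answer the "how do I find them" part: from the shell, "find /mount/point -type f -links +1" does it.  A rough Python sketch of the same scan, which also mirrors the (device, inode) table gnutar keeps in memory, might look like:

```python
import os
import stat
from collections import defaultdict

def hardlinked_files(root):
    """Group regular files with a link count > 1 by (device, inode) --
    roughly one entry per such file in the table gnutar allocates."""
    table = defaultdict(list)
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            st = os.lstat(path)  # lstat: don't follow symlinks
            if stat.S_ISREG(st.st_mode) and st.st_nlink > 1:
                table[(st.st_dev, st.st_ino)].append(path)
    return table
```

The number of distinct keys in the result is roughly the number of tracking structures tar would have to hold at once for that DLE.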

-- JF


Re: AMANDA+Samba setup...

2005-06-24 Thread Jay Fenlason
On Fri, Jun 24, 2005 at 10:04:49AM -0400, Matt Hyclak wrote:
> On Fri, Jun 24, 2005 at 03:51:13PM +0200, Paul Bijnens enlightened us:
> > What is also strange is your statement just before the above
> > paragraph:
> > 
> > >When I take a look at the Amanda logs in /var here is what I see:
> > 
> > because the logs that contain the above lines is in /tmp/amanda/
> > unless you make special arrangements during compilation of the
> > client programs.
> > 
> 
> At some point, I know the redhat RPMS moved that to /var/log/amanda. 

Yup.  Our security team doesn't like important logs left in /tmp.
Come to think of it, I don't either.

-- JF


Re: holding disk with vtapes?

2005-05-18 Thread Jay Fenlason
On Wed, May 18, 2005 at 09:47:29PM +0200, Paul Bijnens wrote:
> Eric Dantan Rzewnicki wrote:
> >Is it normal for some dumps to go directly to tape, skipping the holding
> >disk?
> >
> >I see that happening sometimes, but wonder if it means I have something
> >misconfigured.
> 
> 
> Those DLE's with an estimated size larger than the configured value
> for holdingdisk will bypass holdingdisk and dump directly to tape.

Also, there's a bug in current versions of gnu tar that causes it to
produce extremely large estimates if the filesystem contains sparse
files.  On Fedora Core x86_64 machines, attempting to back up /var can
be a challenge because tar thinks /var/log/lastlog is a terabyte or
two in size and produces estimates that are somewhat larger than my
50G tape.  See the commentary in
https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=154882

The usual symptom of this bug is Amanda refusing to dump the DLE at
all, but I can imagine it forcing a dump to go direct-to-tape instead
of to the holding disk under some circumstances.
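The underlying issue is easy to reproduce: a sparse file's apparent size (what a naive estimate sees) can be wildly larger than the blocks actually allocated.  A small sketch:

```python
import os
import tempfile

def apparent_vs_allocated(path):
    """Return (apparent size, bytes actually allocated) for a file."""
    st = os.stat(path)
    # st_blocks is counted in 512-byte units regardless of filesystem
    return st.st_size, st.st_blocks * 512

# Make a ~100 MB sparse file containing a single real byte at the end.
fd, path = tempfile.mkstemp()
os.lseek(fd, 100 * 1024 * 1024, os.SEEK_SET)
os.write(fd, b"x")
os.close(fd)

apparent, allocated = apparent_vs_allocated(path)
print(apparent, allocated)  # apparent is ~100 MB; allocated is tiny
os.remove(path)
```

An estimate based on apparent size balloons exactly the way the lastlog case above does; one based on allocated blocks stays sane.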

> I noticed that the example amanda.conf file has specified "use 290 m"
> for the holdingdisk section.
> Many people seem to forget to adapt that arbitrary number there.

Maybe we should change it to "use 1t"?  That'll get people to notice
it, won't it? :-)

-- JF


Re: Append Files using tar

2005-04-12 Thread Jay Fenlason
On Tue, Apr 12, 2005 at 04:15:51PM +0200, Paul Bijnens wrote:

> Most modern tapedevices are not random access.
> I don't know how to "rewind a little", and neither does the gnutar
> programmer which states in the beginning of the "update.c" source
> file: "... if [the files] are on raw tape or something like that,
> it will probably lose... */ ".

That was probably me, lo those many years ago.

> I guess that if you "dd" the first file, you find a nice tar file,
> and if you "dd" the next file, which seems to be written to
> tape too, you'll find another file beginning with the last block
> again, now having the trailer replaced with the next archive members.
> Useless actually.

The fact that tar didn't give you an error is clearly a bug.  You
should report it.  Tar should have noticed that its attempt to
backspace the archive failed.

It is theoretically possible to make this work on a theoretical tape
drive, if the drive in question supports the MTBSR command in the
MTIOCTOP ioctl.  Back when I was working on Gnu Tar, I considered
attempting to use said command to backspace a tape drive when
attempting to append to an archive on a tape.  Unfortunately, if I
wrote such a thing, someone would eventually attempt to use it on
hardware that was incapable of doing what they wanted, and would
become very irate when their real-world tape drive behaved like a
mechanical device and destroyed their data.

Back when I was working on Gnu Tar, I had access to two different
kinds of tape drives: 9-track reel-to-reel tapes, and cartridge
"streaming" tapes.  The 9-track drives a capable of backspacing a
single record (aka "tape blocks"), but reading a tape block,
backspacing, and attempting to write over the tape block will cause
the new tape block to be written to a slightly different physical
location on the tape.  Eventually (how soon depends on the inaccuracy
of the paticular tape drive) the new tape block will be written on top
of the previous tape block, destroying the data on that section of the
tape.

Streaming tape drives are (were) incapable of backspacing a single
record, so there is simply no way to append to a tape file.

Tape drives that use timing tracks for aligning data on the tape may
be capable of backspacing a tape block and rewriting it safely, but I
didn't have any such devices, so they were strictly theoretical.

In any case, tar has no way to tell what kind of tape drive the
archive is on, and relying on the user to know the capabilities of
their tape drive is foolish.  So tar doesn't even try.

Here's how to append files to a tape archive:

1: free up a chunk of disk space on the machine with the tape drive
   that is at least as large as the capacity of the tape.
2: wind the tape to the start of the archive you wish to append to ("mt
rewind" or "mt fsf N" as appropriate)
3: use dd to copy the tape archive into the free disk space ("dd
   bs={size of tape blocks} < /dev/ntape > /free/space/path/tmp.tar")
4: append the files you want to the temporary copy of the tape archive
   ("tar rvbf {size of tape blocks} /free/space/path/tmp.tar ...")
5: backspace the tape to the start of the archive you want to append
   to ("mt bsf 1" or "mt rewind", etc.)
6: copy the new archive back to the tape with dd
   ("dd bs={size of tape blocks} < /free/space/path/tmp.tar >
   /dev/ntape")

Note that some tape drives will only allow you to either append to the
tape or start writing at the beginning, in which case you will have to
copy off any tape archives before the one you wish to append to, and
rewrite the tape from the beginning.
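The reason step 4 works on the disk copy is that a regular file is seekable, so tar can back up over the end-of-archive trailer before writing the new members.  Python's tarfile module does the same read-back-and-overwrite in append mode, which makes a convenient illustration:

```python
import io
import os
import tarfile
import tempfile

def add_member(archive, name, data):
    """Append one file to an existing on-disk tar archive.
    Mode "a" seeks back over the end-of-archive blocks first --
    the step that fails on a non-seekable raw tape device."""
    with tarfile.open(archive, "a") as tf:
        info = tarfile.TarInfo(name)
        info.size = len(data)
        tf.addfile(info, io.BytesIO(data))

# Build an archive with one member, then append a second.
tmp = tempfile.mkdtemp()
arc = os.path.join(tmp, "backup.tar")
with tarfile.open(arc, "w") as tf:
    info = tarfile.TarInfo("first.txt")
    info.size = 5
    tf.addfile(info, io.BytesIO(b"hello"))

add_member(arc, "second.txt", b"world")

with tarfile.open(arc) as tf:
    names = tf.getnames()
print(names)  # ['first.txt', 'second.txt']
```

Try the same open-for-append on /dev/ntape and you hit exactly the "probably lose" case from update.c: there is no reliable seek-back on the raw device.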

-- JF


Re: Amanda Client

2004-09-28 Thread Jay Fenlason
On Tue, Sep 28, 2004 at 02:29:16PM -0500, Jason Miller wrote:
> Also I do see with a netstat -an the following line
> udp4   0  0  *.10080*.*
> 
> After installing the xinetd I got that far, client still times out. I am
> running the amcheck -c DailySet1 from the server to test this BTW. No
> firewall is on this box or on the server and its on the same network through
> a switch so nothing should be blocking the traffic. It just appears the
> amandad client is not running on the client server, if I do ps -aux I do not
> see the amandad in the list.

Amandad won't be running until xinetd starts it.  Xinetd won't start
it until it sees incoming packets for it.  That's xinetd's job :-)

It might be time to run ethereal on both the client machine and the
Amanda server.  Make sure the server is really sending the Amanda
requests to the correct IP address.  Then make sure the client machine
is receiving the requests.  There may be an egress filter on the
Amanda server that's blocking the outgoing packets, or the server may
have the wrong IP address for the client.

-- JF


Re: 2.6.6-rc2 and newer cause trouble with amanda

2004-06-16 Thread Jay Fenlason
I've experienced similar problems backing up a Windows 2000 client
(with the Cygwin Amanda client) from a Red Hat Enterprise Linux 2.1 AS
(Itanium) box.  The errors only happen when the Windows box is on the
far side of a Fedora Core 2 bridge/gateway.  When the Windows box is
directly attached to the same network as the Amanda server, it works
fine.  Oddly enough, the Fedora Core 2 box that I back up in the same
configuration also works fine.

Fortunately, I have a plentiful supply of kernel hackers here, so if
I get good tcpdump output(s) from tonight's run, maybe I can find out
what's happening.

-- JF


Re: moved to new disk, now amanda wants to do level 0's on whole system

2003-11-14 Thread Jay Fenlason
On Fri, Nov 14, 2003 at 01:23:12AM -0500, Gene Heskett wrote:
> Greetings all;
> 
> See subject, Which of course is leading to a 90% failure rate as the 
> whole system has around 40Gb, but the tapes are only 4Gb's.
> 
> What happened is that I put in a new 120 Gb drive, 2x the size of the 
> one I took out, mainly because the root partition was full, and no 
> room to readjust things was available.
> 
> Although the /dev/whatevers have changed, the mountpoints have not.  I 
> used cp to copy some of the data, and fr to do some, seems cp cannot 
> see a .file!
> 
> I missed one run while the disk was being configured.  One person said 
> he had never swapped disks without doing a re-install, but I just 
> did, and everything seems to be working just fine.  Its tedious for 
> sure, but it can be done.
> 
> I have expanded the dumpcycle and runspercycle from 8 to 10 because it 
> seemed amanda was having a hard time hitting its best balance point.  
> tapecycle is still 28, but I can add more.
> 
> So the disklist is unchanged.  Why does amanda want to do a level 0 on 
> the whole system?

When you copied the files, the inode numbers for each file changed.
When gtar sees the inode number of a file change, it assumes the
contents of the file have changed too (it doesn't store md5sums of
file data or anything clever like that).  Amanda sees that an
incremental is the same size as a level 0, so it tries to do a level
0.
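You can see why gtar gets suspicious: even a metadata-preserving copy allocates a fresh inode, and the inode number is part of what listed-incremental mode compares.  A minimal demonstration:

```python
import os
import shutil
import tempfile

d = tempfile.mkdtemp()
src = os.path.join(d, "original")
dst = os.path.join(d, "copied")
with open(src, "w") as f:
    f.write("same contents\n")

shutil.copy2(src, dst)  # copy2 preserves mtime and permissions

s, t = os.stat(src), os.stat(dst)
print(s.st_mtime_ns == t.st_mtime_ns)  # True: mtime survived the copy
print(s.st_ino == t.st_ino)           # False: the copy is a new inode
```

Identical contents, identical mtime, different inode: to gtar's listed-incremental bookkeeping that still reads as "this file changed."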

Also, cp/fr may not have correctly reset the modification times of the
files when it copied them.  Oh, and they may not handle links well
either.  To copy directory trees, I usually use "( cd /fromdir ; tar
cf - . ) | ( cd /todir ; tar xpf -)", which preserves modification
times and permissions.

Hmm.  How clever do you feel like being?  If you can somehow get a
list of the files which have actually changed, you could edit the last
listed-incremental data file and update the inode numbers of all the
files you don't want to re-dump.  It'd probably be an amusing perl
script. . .

-- JF


Re: odd problem setting up amanda on Linux

2003-09-11 Thread Jay Fenlason
On Thu, Sep 11, 2003 at 09:59:20AM -0700, Harlan Harris wrote:
> Hi,

> I'm just starting to set up Amanda, and am having trouble. I've got
> two Red Hat 9 Linux boxes, one as the server, and one as a client. The
> server backs itself up fine. But the client is timing out. I've
> determined that it's connecting, but the server isn't properly ack'ing
> the client! I attach a log file. I'm using the RPM of Amanda, and
> didn't compile it myself (I can if necessary). Version is 2.4.3.

> Couldn't figure out how to even search for this problem on the
> archives of this list! It's hard to describe! Any suggestions?

The obvious thing to check on a Red Hat Linux box is that the
firewalls are not blocking the packets.  If either machine has its
firewall enabled, you'll get this kind of weird timeout.

-- JF


Re: problem with dumper in amanda 2.4.4

2003-09-04 Thread Jay Fenlason
On Thu, Sep 04, 2003 at 04:57:52PM -0400, Joshua Baker-LePain wrote:
> On Thu, 4 Sep 2003 at 4:53pm, Ashwin Bijur wrote
> 
> > When I run amanda backups, I am getting the following message in 
> > /var/log/messages:
> > 
> > kernel: application bug: dumper(23061) has SIGCHLD set to SIG_IGN but 
> > calls wait()
> > 
> > The backups are taking an unusually long time.  Does this have to do 
> > anything with the above message?
> > 
> > I am running amanda 2.4.4 on Redhat 9.0.
> 
> I'm pretty sure this has to do with the NPTL stuff in the RH9 kernel.  I'm 
> no programmer, though, so that's all I can offer...

Bingo!  If you ignore SIGCHLD, wait()'s behavior becomes undefined.
Before Red Hat Linux 9, wait() mostly (well, except for a few race
conditions) still worked when you ignored SIGCHLD.  With the advent of
NPTL, the behavior of this undefined combination changed, so someone
modified glibc (I think) to flag programs that do that so they can be
fixed.
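The behavior is easy to demonstrate (Python here, but the C semantics are the same: with SIGCHLD set to SIG_IGN on a modern Linux kernel, children are reaped automatically and wait() comes back empty-handed with ECHILD):

```python
import os
import signal
import time

# Ask the kernel to auto-reap children instead of leaving zombies --
# the same thing dumper.c's signal(SIGCHLD, SIG_IGN) asked for.
signal.signal(signal.SIGCHLD, signal.SIG_IGN)

pid = os.fork()
if pid == 0:
    os._exit(0)           # child: exit immediately

time.sleep(0.5)           # parent: give the kernel time to reap it
try:
    os.wait()
    outcome = "wait() found a child"
except ChildProcessError:  # errno ECHILD
    outcome = "wait() failed with ECHILD: the child was already reaped"
print(outcome)
```

Dumper relied on wait() still returning the child's status despite the SIG_IGN, which is exactly the undefined combination the kernel warning flags; hence the patch below simply stops ignoring SIGCHLD.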

I'm attaching the patch.

So why're you using 2.4.4 and not 2.4.4p1?  If you're going to the
trouble of compiling something yourself, why not use the latest
version?

-- JF
--- amanda-2.4.3/server-src/dumper.c.orig   2003-02-11 21:10:03.0 -0500
+++ amanda-2.4.3/server-src/dumper.c2003-02-11 21:10:27.0 -0500
@@ -254,7 +254,6 @@
error("can't get login name for my uid %ld", (long)getuid());
 
 signal(SIGPIPE, SIG_IGN);
-signal(SIGCHLD, SIG_IGN);
 
 interactive = isatty(0);
 


Re: maybe this is a dumb question

2003-08-26 Thread Jay Fenlason
On Tue, Aug 26, 2003 at 10:34:49AM -0500, Chris Barnes wrote:
> One of my student workers - who happens to be setting up Amanda,
> recently came to me with a concern about how the backup/restore process
> handles soft links.   I suspect that this is a non-issue in that Amanda
> has already figured out a way to deal with this, but just in case...
> 
> Let's say a user creates a soft link in their home directory that points
> to
> /usr/bin, eg:
> 
> lrwxrwxrwx  1 cbarnes  barnes   15 July  1 13:35 mybin -> /usr/bin/
> 
> Then the backups of the home are run.
> 
> Then the user removes the softlink and creates a real directory with
> that same name.
> 
> drwxr-xr-x  2 cbarnes  barnes 4096 Aug 18 17:23 mybin
> 
> and then puts a modified program into that directory:
> 
> drwxr-xr-x2 cbarnes  barnes   4096 Aug 18 17:23 ./
> drwxr-xr-x   13 cbarnes  cbarnes  4096 Aug 25 17:31 ../
> -r-s--x--x1 cbarnes  barnes   7667 Aug 18 17:26 passwd*
> 
> and backups are run again.
> 
> 
> The concern is that when a restore is run, the softlink to the /usr/bin
> directory will be recreated, then the file will be restored into that
> directory, overwriting the file that is supposed to be there (ie.
> creating a security issue).
> 
> 
> 1) Is this possible, or does Amanada already do something to prevent
> this?
> 2) If it is possbile, are there any security considerations we need to
> take into consideration when running backups or restore jobs?

Amanda doesn't do anything about this--it just calls the underlying
backup mechanism (gnutar or dump) to do the dirty work.  It's up to
the underlying backup mechanism to handle this.  So the right people
to ask a question like this are the gnutar maintainers or the
dump maintainers.

It's been too long since I wrote gnutar for me to remember how it
handles cases like this.  You should ask a more current maintainer.

A similar attack would be to have a directory "mybin" containing a file
"passwd" before a dump is done.  Then replace "mybin" with a symbolic
link to "/bin" and request a restore of "mybin/passwd".

I'll check out both of these scenarios and report back on what I find.

-- JF


Re: Configuration Guide.

2003-08-12 Thread Jay Fenlason
On Wed, Aug 06, 2003 at 11:43:02AM +0300, rehanann wrote:
> Dear All,
>  I need configuration document for Amanda 2.4.4 p1 if its
> available please forward me and I am installing this in Red Hat 8.0.
> 
> "Without any sense you cannot make any sensible things for
> example if you have ingredients and you don't know recipe you can never
> succeed"

Have you read all the files in /usr/share/doc/amanda-server*/ ?

Have you read the manual pages for the various Amanda commands?

Have you googled for appropriate HOWTOs?

Have you read the Amanda web site?

Have you read the chapter on Amanda in "Unix Backup and Recovery" by
W. Curtis Preston?

Aside from all that, there really isn't any Amanda configuration
documentation.  If you want to change that, well, you can write some.
Once it's done, you can post it here so it can be included in future
versions of Amanda.  As they say on another mailing list: "Patches
Thoughtfully Considered".

-- JF


Re: chg-manual bug?

2003-07-28 Thread Jay Fenlason
On Tue, Jul 29, 2003 at 01:32:19AM +0800, Matthias Bethke wrote:
> Hi Gene,
> on Monday, 2003-07-28 at 07:25:08, you wrote:
> > >| EGREP=grep -E
> > >
> > >certainly looked wrong to me. Quoting did it:
> > >| EGREP='grep -E'
> > 
> > I don't use this but that is how it is in my copy of 
> > amanda-2.4.4p1-20030716, at line 33, with the quotes.
> 
> Must have been fixed meanwhile -- my install directory is from June
> 26th, I guess I downloaded the source one or two days before.
> 
> > > length 3848 mbytes
> > > [...]
> > 
> > This looks as if the drives compression is on.  If it is, then the 
> > data from /dev/urandom gets somewhat expanded by it, and will report 
> > a bit less than the tapes raw capacity.
> 
> Hmm...I was a bit surprised about this value already, but according to
> both the DIP switch and mt, compression is off.
> 
> > Since the gzip that amanda uses can beat the hardware in every dept
> > but speed, we generally recommend that the drives hardware compressor
> > be disabled.
> 
> Can amanda be told to use bzip2? This P3 is far from fully loaded with
> its job as a WLAN router and fileserver, and as it cannot be clocked
> down to save some energy, it might as well do something useful for the
> calories it burns :)

I've been looking at patching amanda to do bzip2 (actually, any
arbitrary external compression program), but I'm not done
yet. :-( A patch to just replace gzip with bzip2 is easy, but loses
the ability to read old dumps.

I don't know if anyone else has done any work in that direction.
Maybe these posts to the list will jog someone's memory.
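The "loses the ability to read old dumps" problem is just format incompatibility: a bzip2 decompressor can't parse a gzip stream, so Amanda would have to record which compressor wrote each dump.  A quick illustration with hypothetical data and the standard-library modules:

```python
import bz2
import gzip

original = b"amanda dump payload\n" * 100
old_dump = gzip.compress(original)   # how existing dumps were written

# A client switched wholesale to bzip2 can write and read new dumps...
new_dump = bz2.compress(original)
assert bz2.decompress(new_dump) == original

# ...but it cannot read the old gzip-compressed dumps back.
try:
    bz2.decompress(old_dump)
    readable = True
except OSError:
    readable = False
print("old gzip dump readable by bzip2:", readable)

# Each format is self-identifying by its magic bytes, so a restore
# path could dispatch on the header instead of hard-coding one program.
print(old_dump[:2], new_dump[:3])  # b'\x1f\x8b' vs b'BZh'
```

That magic-byte dispatch is one plausible way an "arbitrary external compression program" patch could keep old dumps restorable.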

-- JF


Re: xinetd shutting off amanda

2003-07-14 Thread Jay Fenlason
On Mon, Jul 14, 2003 at 12:29:03PM -0400, Chris Dahn wrote:
> Hi all,
>   I used to have 2.4.2p2 installed. I built an ebuild for Gentoo for it. 
> Apparently they finally accepted someone's ebuild for Amanda into the CVS 
> tree, and so my system was automatically updated to 2.4.4 last night. 
> However, this messed everything up. I've managed to fix (I think) all of the 
> user/group permission problems, but I'm having problems on one of my servers.
> 
>   Specifically, I start up xinetd, and when I do an amcheck, the remote xinetd 
> spits out:
> 
> "Deactivating service amanda due to excessive incoming connections.  
> Restarting in 30 seconds."
> 
>   It will faithfully continue to do this forever. My other host (the one the 
> client sits on) doesn't have this problem. Any ideas?

Check that the files referenced in /etc/xinetd.d/a* actually exist.  If
they say "server = /usr/lib/amanda/amandad" (etc) and the file is now
installed in /usr/libexec/amanda/amandad (or whatever) you'll get this
failure.

-- JF


Re: Installation

2003-06-24 Thread Jay Fenlason
On Tue, Jun 24, 2003 at 10:36:24AM +0200, Mikkel Gadegaard wrote:
> Hey people
> 
> I have a problem with installing amanda. Until recently I had Amanda running
> on a RedHat 8.0 (amanda installed whith RedHat), that machine wasn't big
> enough so I got a bigger machine and installed RedHat 9.0 on it. This time
> without including Amanda because I wanted to try out compiling and
> installing on my own :-)
> 
> When I do the 3 steps ./configure. make and make install no errors are
> reported but a lot of directories are missing when it is done. The things
> installed is /usr/local/sbin/* /usr/local/lib/* /usr/local/libexec/*
> /usr/local/man/* and thats that. No configdir none of the files under
> /var/ (which I had in the default installation on the old machine.
> 
> My question is: Do I have to create all these files and directories myself
> or have I done something wrong?
> 
> I've used several different options with ./configure and I have wiped the
> config between each new attempt. Latest attempt I used
> 
> ./configure --with-user=backup --with-group=disk --with-config=BackUp
> (trying to let amanda choose everything on it's own.
> 
> Hope someone can help me out ;-)

You can always install the Red Hat supplied Amanda source RPM, and
look at the .spec file I used to build the binary RPMs.  It contains
all the arguments passed to configure, among other useful information.
I don't remember if "make install" is supposed to install the
configdir and related files or whether the amanda.spec file does it by
hand.  The spec file remembers so I don't have to. :-)

As I vaguely recall, /usr/local is intended for locally compiled
programs, which is why most configure scripts use it as the default
installation location.  However, packages shipped as part of the
operating system (e.g. anything Red Hat ships) should never go in that
directory.

Certainly, when it comes upgrade time, being able to find all the
packages you built by doing an ls on /usr/local/bin makes the upgrade
go a lot smoother.

-- JF