Re: Linux DUMP

2002-05-28 Thread Christoph Scheeder

Please, not this discussion again...
It was discussed in depth (if I remember right) last fall,
so please have a look at the archives from that time.
Christoph

Uncle George wrote:

> Ya, but didn't someone post that "DUMP" on linux can fail - if the
> conditions are right? I think it was suggested that SMP systems can
> demonstrate the failure sooner. 
> I think that Mr. Torvalds (sorry if I misspelled it) made that comment or
> conclusion. 
> Are there some caveats that need to be added here?
> /gat
> 
> "Bernhard R. Erdmann" wrote:
> 
>>>Which backup program is best? dump, says some people. Elizabeth D. Zwicky
>>>torture tested lots of backup programs. The clear choice for preserving
>>>all your data and all the peculiarities of Unix filesystems is dump, she
>>>stated. Elizabeth created filesystems containing a large variety of
>>>unusual conditions (and some not so unusual ones) and tested each program
>>>by doing a backup and restore of those filesystems. The peculiarities
>>>included: files with holes, files with holes and a block of nulls, files
>>>with funny characters in their names, unreadable and unwriteable files,
>>>devices, files that change size during the backup, files that are
>>>created/deleted during the backup and more. She presented the results at
>>>LISA V in Oct. 1991.
>>>
>>This article is archived here:
>>http://berdmann.dyndns.org/doc/dump/zwicky/testdump.doc.html
>>
> 





Re: Bad magic number in super-block

2002-05-28 Thread Christoph Scheeder

Hi,
As I read Andre's original message, he isn't
using tar for his backup, and his Linux dump does not like
the ext3 filesystem.
To use tar you have to put the line

program "GNUTAR"

in your dumptype.
If this line is missing, amanda defaults to dump to do the
backups.
And AFAIK there is no dump for ext3 at this time.
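For example, a dumptype along these lines (the name is just an
illustration) forces GNU tar:

define dumptype user-tar {
    program "GNUTAR"
    comment "back up with GNU tar instead of dump"
    index
}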
Christoph

Chris Marble wrote:

> Andre Gauthier wrote:
> 
>>I have recently installed Amanda on Red Hat 7.2 with ext3 fS. I am using
>>tar-1.13.25-4. I complied with group=backup. Ran amcheck and it did not
>>report any errors, but when I ran amdump it did not create an index or a
>>backup. In the debug logs I got permission denied while opening
>>filesystem. I recompiled with user=disk, and I got the same error
>>message. Then chmod o+r on the /dev/sdg1 the filesystem in question as
>>an experiment, it no longer had permission denied   
>>but rather the error message /dev/sdg1: Bad magic number in super-block
>>while opening filesystem.  The filesystem is on the local host.
>>
> 
> Are you sure you have ext3-aware versions of all the ufs tools?
> Can you do a simple tar of the filesystem to tape (without amanda)?
> 





help! taper stop responding

2002-05-28 Thread Wong Ching Kuen Frederick

dear all,

I installed amanda 2.4.3p3 (both rpm and source have been tried) on a redhat
7.3 system. I have used amanda on other systems without any problem for 3
years. However, amanda does not seem to work properly this time. Amanda is able
to store all the data in the temp. disk space. However, when it tries to move
the data to tape, it stops responding. I tried to back up /etc with around
2M of data, and it keeps showing "writing to tape" without any progress. So what
is the possible cause for this? Thank you for your attention.

regards,
fred




Slow dumper

2002-05-28 Thread Bartho Saaiman

I have a problem where my dumper is slow and the taper seems to be faster:


STATISTICS:
                          Total       Full      Daily
                        --------   --------   --------
Estimate Time (hrs:min)    0:02
Run Time (hrs:min)        11:21
Dump Time (hrs:min)        9:42       9:42       0:00
Output Size (meg)       14818.3    14818.3        0.0
Original Size (meg)     20804.7    20804.7        0.0
Avg Compressed Size (%)    71.2       71.2         --
Filesystems Dumped            1          1          0
Avg Dump Rate (k/s)       434.8      434.8         --

Tape Time (hrs:min)        1:37       1:37       0:00
Tape Size (meg)         14818.4    14818.4        0.0
Tape Used (%)              74.1       74.1        0.0
Filesystems Taped             1          1          0
Avg Tp Write Rate (k/s)  2606.1     2606.1         --

Is there a way of changing this to speed up backups?

-- 
|-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-|
Bartho Saaiman
Cellphone * Work   +27 72 341 8626 * +27 21 808 2497 ext 204
Stellenbosch Automotive Engineering http://www.cae.co.za
|-=-=-=-=-=-=-=< registered linux user 236001 >-=-=-=-=-=-=-=-=-|




Re: Linux DUMP

2002-05-28 Thread Uncle George

does this mean that there was a definitive conclusion? 

Christoph Scheeder wrote:
> 
> Please, not this discussion again...



Re: Quicky

2002-05-28 Thread Joshua Baker-LePain

On Tue, 28 May 2002 at 5:01pm, Robert Kearey wrote

> If I change tapetypes (ie, from a 30G config to a 20G one), can I expect 
> catastrophe to occur?
> 
No.  You may get some delayed level 0s, but it will work itself out.

-- 
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University




Re: Linux DUMP

2002-05-28 Thread Joshua Baker-LePain

On Tue, 28 May 2002 at 6:34am, Uncle George wrote

> does this mean that there was a definitive conclusion? 

Yup -- use what you are comfortable with and what your testing proves 
works.

-- 
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University




Re: Slow dumper

2002-05-28 Thread Joshua Baker-LePain

On Tue, 28 May 2002 at 11:09am, Bartho Saaiman wrote

> I have a problem where my dumper is slow and the taper seems to be faster:
> 
> 
> STATISTICS:
>                           Total       Full      Daily
>                         --------   --------   --------
> Estimate Time (hrs:min)    0:02
> Run Time (hrs:min)        11:21
> Dump Time (hrs:min)        9:42       9:42       0:00
> Output Size (meg)       14818.3    14818.3        0.0
> Original Size (meg)     20804.7    20804.7        0.0
> Avg Compressed Size (%)    71.2       71.2         --
> Filesystems Dumped            1          1          0
> Avg Dump Rate (k/s)       434.8      434.8         --
> 
> Tape Time (hrs:min)        1:37       1:37       0:00
> Tape Size (meg)         14818.4    14818.4        0.0
> Tape Used (%)              74.1       74.1        0.0
> Filesystems Taped             1          1          0
> Avg Tp Write Rate (k/s)  2606.1     2606.1         --
> 
> Is there a way of changing this to speed up backups?

The first thing to do, if at all possible, is to use a holding disk.  
Given that your run time=dump time+tape time, it appears you aren't using 
one.  A holding disk big enough for your two largest partitions can 
*significantly* speed things up.
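If you have local disk to spare, a holdingdisk block along these lines
in amanda.conf is all it takes (path and size are placeholders):

holdingdisk hd1 {
    directory "/dumps/amanda"    # any local filesystem with free space
    use 20000 Mb                 # room for your two largest partitions
}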

Next you need to look at your clients, your network, etc.  E.g. you're 
using software compression -- are you doing it on the clients?  How fast 
are they?  Is your network throughput with other apps good?

This isn't an amanda problem per se (except for the holding disk), but one 
of looking at your resources.


-- 
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University




Re: Running out of tape due to full dumps

2002-05-28 Thread Joshua Baker-LePain

On Tue, 28 May 2002 at 4:29pm, Bradley Marshall wrote

> I've got amanda running successfully here for some time, but
> periodically I have a problem where the tape runs out of space.
> This generally happens when 2 of the larger partitions I have have
> a full dump run on them at the same time.  Is there any way to tell
> Amanda to try not to do a full dump of certain file systems at the
> same time?

Why not define a slightly smaller tapelength?  That way amanda won't try 
to put too much on the tape.

Are you using hardware compression?  You may be assuming too much 
compression.
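For example, a deliberately conservative tapetype (name and numbers are
purely illustrative):

define tapetype MY-TAPE-SAFE {
    comment "nominal 20 GB tape, defined short to leave headroom"
    length 19000 mbytes
    filemark 0 kbytes
}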

-- 
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University




Re: Slow dumper

2002-05-28 Thread Bartho Saaiman

Hi Joshua

I am using a dump disk and these backups are all local. I am using the 
following compressions to ensure that the data actually fits on a disk.

define dumptype tgz-best {
 program "GNUTAR"
 options compress-best, index
 priority high
 dumpcycle 0
}

define dumptype tgz-fast {
 program "GNUTAR"
 options compress-fast, index
 priority high
 dumpcycle 0
}

define dumptype tar {
 program "GNUTAR"
 options no-compress, index
 priority medium
 dumpcycle 0
}


Joshua Baker-LePain wrote:
> 
> The first thing to do, if at all possible, is to use a holding disk.  
> Given that your run time=dump time+tape time, it appears you aren't using 
> one.  A holding disk big enough for your two largest partitions can 
> *significantly* speed things up.
> 
> Next you need to look at your clients, your network, etc.  E.g. you're 
> using software compression -- are you doing it on the clients?  How fast 
> are they?  Is your network throughput with other apps good?
> 
> This isn't an amanda problem per se (except for the holding disk), but one 
> of looking at your resources.
> 
> 


-- 
|-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-|
Bartho Saaiman
Cellphone * Work   +27 72 341 8626 * +27 21 808 2497 ext 204
Stellenbosch Automotive Engineering http://www.cae.co.za
|-=-=-=-=-=-=-=< registered linux user 236001 >-=-=-=-=-=-=-=-=-|




Re: Slow dumper

2002-05-28 Thread Joshua Baker-LePain

On Tue, 28 May 2002 at 2:47pm, Bartho Saaiman wrote

> I am using a dump disk and these backups are all local. I am using the 
> following compressions to ensure that the data actually fits on a disk.
> 
What's your hardware?  What OS?

-- 
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University




Re: Linux DUMP

2002-05-28 Thread Uncle George

Sorry, that's a general conclusion to most things in life. 

Is there a situation (or situations) where DUMP can fail? If yes, why are there no
warning labels (i.e. "the probability of failure is 1 in 1 billion")? If
no, then can I see the proof that absolutely refutes Mr. Torvalds'
statement?
/gat

It's interesting that I was unaware of this dilemma (the possible failure
of DUMP) until it was posted on this list. Maybe others, as they post
DUMP vs. TAR inquiries, should also be made aware of this possible
scenario.
I'm also reasonably sure that most parents would not contemplate
placing small children (as well as small adults) in front seats with
air-bags nowadays, even though testing proved that air-bags are safe
and a proven safety feature. 
 
Joshua Baker-LePain wrote:
> > does this mean that there was a definitive conclusion?
> 
> Yup -- use what you are comfortable with and what your testing proves
> works.



Re: Slow dumper

2002-05-28 Thread Bartho Saaiman

I am using:

[bartho@caepdc amanda]$ uname -a
Linux caepdc.cae.sun.ac.za 2.4.18-6mdk #1 Fri Mar 15 02:59:08 CET 2002 
i686 unknown

[bartho@caepdc amanda]$ cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
   Vendor: IBM      Model: DDYS-T36950M     Rev: S96H
   Type:   Direct-Access                    ANSI SCSI revision: 03
Host: scsi0 Channel: 00 Id: 01 Lun: 00
   Vendor: IBM      Model: DDYS-T36950M     Rev: S96H
   Type:   Direct-Access                    ANSI SCSI revision: 03
Host: scsi0 Channel: 00 Id: 02 Lun: 00
   Vendor: IBM      Model: DDYS-T36950M     Rev: S96H
   Type:   Direct-Access                    ANSI SCSI revision: 03
Host: scsi0 Channel: 00 Id: 03 Lun: 00
   Vendor: IBM      Model: DDYS-T36950M     Rev: S96H
   Type:   Direct-Access                    ANSI SCSI revision: 03
Host: scsi0 Channel: 00 Id: 04 Lun: 00
   Vendor: IBM      Model: DDYS-T36950M     Rev: S96H
   Type:   Direct-Access                    ANSI SCSI revision: 03
Host: scsi0 Channel: 00 Id: 05 Lun: 00
   Vendor: HP       Model: C5713A           Rev: H910
   Type:   Sequential-Access                ANSI SCSI revision: 02

[bartho@caepdc amanda]$ cat /proc/scsi/aic7xxx/0
Adaptec AIC7xxx driver version: 6.2.4
aic7892: Ultra160 Wide Channel A, SCSI Id=7, 32/253 SCBs
Channel A Target 0 Negotiation Settings
 User: 160.000MB/s transfers (80.000MHz DT, offset 255, 16bit)
 Goal: 160.000MB/s transfers (80.000MHz DT, offset 63, 16bit)
 Curr: 160.000MB/s transfers (80.000MHz DT, offset 63, 16bit)
 Channel A Target 0 Lun 0 Settings
 Commands Queued 5794608
 Commands Active 0
 Command Openings 253
 Max Tagged Openings 253
 Device Queue Frozen Count 0
Channel A Target 1 Negotiation Settings
 User: 160.000MB/s transfers (80.000MHz DT, offset 255, 16bit)
 Goal: 160.000MB/s transfers (80.000MHz DT, offset 63, 16bit)
 Curr: 160.000MB/s transfers (80.000MHz DT, offset 63, 16bit)
 Channel A Target 1 Lun 0 Settings
 Commands Queued 6208660
 Commands Active 0
 Command Openings 253
 Max Tagged Openings 253
 Device Queue Frozen Count 0
Channel A Target 2 Negotiation Settings
 User: 160.000MB/s transfers (80.000MHz DT, offset 255, 16bit)
 Goal: 160.000MB/s transfers (80.000MHz DT, offset 63, 16bit)
 Curr: 160.000MB/s transfers (80.000MHz DT, offset 63, 16bit)
 Channel A Target 2 Lun 0 Settings
 Commands Queued 4897181
 Commands Active 0
 Command Openings 253
 Max Tagged Openings 253
 Device Queue Frozen Count 0
Channel A Target 3 Negotiation Settings
 User: 160.000MB/s transfers (80.000MHz DT, offset 255, 16bit)
 Goal: 160.000MB/s transfers (80.000MHz DT, offset 63, 16bit)
 Curr: 160.000MB/s transfers (80.000MHz DT, offset 63, 16bit)
 Channel A Target 3 Lun 0 Settings
 Commands Queued 1139811
 Commands Active 0
 Command Openings 253
 Max Tagged Openings 253
 Device Queue Frozen Count 0
Channel A Target 4 Negotiation Settings
 User: 160.000MB/s transfers (80.000MHz DT, offset 255, 16bit)
 Goal: 160.000MB/s transfers (80.000MHz DT, offset 63, 16bit)
 Curr: 160.000MB/s transfers (80.000MHz DT, offset 63, 16bit)
 Channel A Target 4 Lun 0 Settings
 Commands Queued 6227326
 Commands Active 0
 Command Openings 128
 Max Tagged Openings 253
 Device Queue Frozen Count 0
Channel A Target 5 Negotiation Settings
 User: 160.000MB/s transfers (80.000MHz DT, offset 255, 16bit)
 Goal: 40.000MB/s transfers (20.000MHz, offset 32, 16bit)
 Curr: 40.000MB/s transfers (20.000MHz, offset 32, 16bit)
 Channel A Target 5 Lun 0 Settings
 Commands Queued 3688289
 Commands Active 0
 Command Openings 1
 Max Tagged Openings 0
 Device Queue Frozen Count 0
Channel A Target 6 Negotiation Settings
 User: 160.000MB/s transfers (80.000MHz DT, offset 255, 16bit)
Channel A Target 7 Negotiation Settings
 User: 160.000MB/s transfers (80.000MHz DT, offset 255, 16bit)
Channel A Target 8 Negotiation Settings
 User: 160.000MB/s transfers (80.000MHz DT, offset 255, 16bit)
Channel A Target 9 Negotiation Settings
 User: 160.000MB/s transfers (80.000MHz DT, offset 255, 16bit)
Channel A Target 10 Negotiation Settings
 User: 160.000MB/s transfers (80.000MHz DT, offset 255, 16bit)
Channel A Target 11 Negotiation Settings
 User: 160.000MB/s transfers (80.000MHz DT, offset 255, 16bit)
Channel A Target 12 Negotiation Settings
 User: 160.000MB/s transfers (

Re: Linux DUMP

2002-05-28 Thread Christopher Linn

> "Bernhard R. Erdmann" wrote:
> > 
> > > Which backup program is best? dump, says some people. Elizabeth D. Zwicky
> > > torture tested lots of backup programs. The clear choice for preserving
> > > all your data and all the peculiarities of Unix filesystems is dump, she
> > > stated. Elizabeth created filesystems containing a large variety of
> > > unusual conditions (and some not so unusual ones) and tested each program
> > > by doing a backup and restore of those filesystems. The peculiarities
> > > included: files with holes, files with holes and a block of nulls, files
> > > with funny characters in their names, unreadable and unwriteable files,
> > > devices, files that change size during the backup, files that are
> > > created/deleted during the backup and more. She presented the results at
> > > LISA V in Oct. 1991.
> > 
> > This article is archived here:
> > http://berdmann.dyndns.org/doc/dump/zwicky/testdump.doc.html
>
On Mon, May 27, 2002 at 06:02:33PM -0400, the top-poster known as "Uncle George" wrote:
> Ya, but didn't someone post that "DUMP" on linux can fail - if the
> conditions are right? I think it was suggested that SMP systems can
> demonstrate the failure sooner. 
> I think that Mr. Torvalds (sorry if I misspelled it) made that comment or
> conclusion. 
> Are there some caveats that need to be added here?
> /gat

Here's an interesting read for anyone doing work in backup and archival
for linux systems:

http://lwn.net/2001/0503/kernel.php3

That article makes it problematic to consider using dump on modern
linux systems.  OTOH, a problem I have had with GNU tar on linux is
that if there is a stale NFS file handle in the area being archived,
the tar will fail, whereas dump (ufsdump on e.g. SunOS) does not suffer
these sorts of problems.

chris

-- 
Christopher Linn, <[EMAIL PROTECTED]>| By no means shall either the CEC
Staff System Administrator| or MTU be held in any way liable
  Center for Experimental Computation | for any opinions or conjecture I
Michigan Technological University | hold to or imply to hold herein.



Re: Slow dumper

2002-05-28 Thread Joshua Baker-LePain

On Tue, 28 May 2002 at 2:59pm, Bartho Saaiman wrote

> I am using:

Are you doing any sort of software RAID over all those disks?  What's your 
CPU speed?

Basically, I would run some tar tests (with gzip -- you can see the exact 
command amanda runs in /tmp/amanda/sendbackup*debug) and see what kind of 
performance you're getting.
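A rough test along these lines (path is a placeholder) shows what the
dumper side can sustain:

# time tar cf - /some/partition | gzip --fast | dd of=/dev/null bs=32k

If that rate is close to the 434 k/s in your report, the bottleneck is
tar/gzip/disk rather than amanda itself.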

-- 
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University




Re: Slow dumper

2002-05-28 Thread Ulrik Sandberg

On Tue, 28 May 2002, Joshua Baker-LePain wrote:

> On Tue, 28 May 2002 at 11:09am, Bartho Saaiman wrote
> ...
> > Filesystems Dumped1  1  0
>
> The first thing to do, if at all possible, is to use a holding disk.
> Given that your run time=dump time+tape time, it appears you aren't using
> one.

Wouldn't a single file system always add up to dump time + tape time if a
holding disk *is* in fact used?

--
Ulrik Sandberg





Re: Slow dumper

2002-05-28 Thread Joshua Baker-LePain

On Tue, 28 May 2002 at 3:14pm, Ulrik Sandberg wrote

> On Tue, 28 May 2002, Joshua Baker-LePain wrote:
> 
> > On Tue, 28 May 2002 at 11:09am, Bartho Saaiman wrote
> > ...
> > > Filesystems Dumped1  1  0
> >
> > The first thing to do, if at all possible, is to use a holding disk.
> > Given that your run time=dump time+tape time, it appears you aren't using
> > one.
> 
> Wouldn't a single file system always add up to dump time + tape time if a
> holding disk *is* in fact used?

Err, yep.  *dope slaps self*  Can I claim NECY[1]?

In that case, it's really a question of optimizing the system, not 
amanda really at all.

[1] Not Enough Coffee Yet

-- 
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University




Re: Linux DUMP

2002-05-28 Thread Christoph Scheeder

No conclusion that all people on this list agree to.
You can boil the discussion down to the following:
1.) linux-ext2-dump is guaranteed to work correctly if you have a
completely inactive and sync'ed filesystem; in other words,
if your fs is not mounted at all, or at least mounted read-only.
In all other situations there is a risk of having file data and
filesystem metadata in buffers of the kernel and not on disk.
As linux-ext2-dump accesses the raw device to get its images,
it bypasses these buffers, and therefore you can/will lose data.
So linux-ext2-dump is not designed to back up active filesystems.

The result of doing that may vary from a completely fine backup image,
over partially garbled files, up to a completely useless image,
without your being told about it, as dump thinks all went OK.

And you are right: in my opinion this should be mentioned at
least in the manpage of ext2-dump, in big bold letters.

So I concluded not to use ext2-dump for my systems.

By the way:
did you notice the version number of ext2-dump being 0.4.xx?
This marks it as BETA software in development status, not yet
ready for public production use.
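A minimal sketch of the "inactive filesystem" approach from point 1
(device and mount point are placeholders):

# mount -o remount,ro /home         # nothing can change under dump now
# dump 0uf /dev/nst0 /home
# mount -o remount,rw /home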

2.) GNU tar reads files via the normal filesystem calls,
and therefore does not have to worry about unwritten buffers, as it
always gets the correct data from the kernel.
This means you can use GNU tar on active filesystems, with a few caveats:
a.) some files may get truncated/deleted while tar is reading them.
tar will whine and whistle about it, telling you the file
changed while reading it, but your image will be
completely readable, with those files being incorrect.
b.) GNU tar changes the access time of every file it backs up,
so if you need the atime you won't be able to use tar.

I hope this sheds some light on the problem.
Christoph

PS: this is my conclusion on this topic; if yours differs from
 mine, please don't flame me. Deciding to use ext2-dump on active
 filesystems is left to you. I won't try to talk you out of it;
 I simply try to show the facts.


Uncle George wrote:

> does this mean that there was a definitive conclusion? 
> 
> Christoph Scheeder wrote:
> 
>>Please, not this discussion again...
>>
> 





RE: Linux DUMP

2002-05-28 Thread David Meissner

I think the warnings about dump are well documented in the man pages. The
man page for Solaris ufsdump, for example, recommends running dump in
single-user mode or on unmounted disks. I believe a similar warning is
provided for other versions of dump.

DavidM


-Original Message-
From: Uncle George [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, May 28, 2002 5:51 AM
Cc: Amanda mailinglist
Subject: Re: Linux DUMP


Sorry, that's a general conclusion to most things in life. 

Is there a situation (or situations) where DUMP can fail? If yes, why are there no
warning labels (i.e. "the probability of failure is 1 in 1 billion")? If
no, then can I see the proof that absolutely refutes Mr. Torvalds'
statement?
/gat

It's interesting that I was unaware of this dilemma (the possible failure
of DUMP) until it was posted on this list. Maybe others, as they post
DUMP vs. TAR inquiries, should also be made aware of this possible
scenario.
I'm also reasonably sure that most parents would not contemplate
placing small children (as well as small adults) in front seats with
air-bags nowadays, even though testing proved that air-bags are safe
and a proven safety feature. 
 
Joshua Baker-LePain wrote:
> > does this mean that there was a definitive conclusion?
> 
> Yup -- use what you are comfortable with and what your testing proves
> works.



Re: Linux DUMP

2002-05-28 Thread Anthony A. D. Talltree

>It's interesting that I was unaware of this dilemma ( the possible failure
>of DUMP ) until it was posted on this list

It's mentioned in the second paragraph of Sun's ufsdump man page. 
Despite all the FUD that's been parroted about dump over the years, by
and large it's worked just fine for most people.

Either the Linux kernel or the Linux incarnation of dump (or both) is
apparently way broken, and Linus (or whoever) seems to be too lazy or
stubborn to fix it.  Tar is simply not a general-purpose replacement for
something that backs up a whole filesystem without trashing the read
dates on files - and in much less time.




amcheck && inetd looping error

2002-05-28 Thread James Shearer

A two-fold newbie (Amanda, Solaris) asks:

I am having some problems making amanda happy as a client on a Solaris 8
box.  In a nutshell, I am seeing the inetd looping error discussed on
the list back in February.  Basically, the problem is this:

1.) I have a Solaris 8 box configured as the tape server with no
*apparent* problems.

2.) I am trying to configure another Solaris 8 box as a client.  The
build of 2.4.2p2 goes just fine, as does the modification and SIGHUPing
of inetd.conf.  'netstat -a | grep -i amanda' verifies that the amanda
service is indeed listening as it should be

3.) When I run 'su amanda -c "/usr/local/sbin/amcheck daily"', everything
is a-ok with my configuration except that the client (murrow) host check
fails like this:

WARNING: murrow: selfcheck request timed out.  Host down?

4.) So, I checked /var/adm/messages on murrow to find the following
messages (trimmed):

[ID 858011 daemon.warning] /usr/local/libexec/amandad: Killed
last message repeated 38 times
amanda/udp server failing (looping), service terminated

5.) So it seems that amcheck is pushing more than 40 requests per minute
(the default limit for Solaris inetd) at the amanda service on the
client.  I can accommodate this by making use of the '-r' switch to
inetd.  But, my questions are:

Do I want to do this?  Why is amcheck pushing so many requests at the
client service?  How many requests per minute should I allow for?  And
(total newbie-ness, sorry) how can I restart inetd with the added '-r'
switch properly without *rebooting* the box?  Can it just be started by
hand "safely?"

Thanks for any guidance!

Jim









Re: amcheck fs permissions

2002-05-28 Thread Robert L. Becker

It was a quiet weekend -- Glad to see this list popping again.

A set-up problem that I sent last week is still with me, i.e. amcheck
reports "permission denied" for the disks in disklist (client and server
are the same). Since last posting, I've tried changing disklist to specify local disk
access:

raleigh.cpt.afip.org /gilmore nocomp-root -1 local
raleigh.cpt.afip.org /dev/dsk/dks0d4s3 nocomp-root -1 local

I also tried recompile/reinstall (both as root) with

FORCE_USERID=no

set in config.site file.

Neither trick fixed the problem, i.e:

raleigh 37% amcheck -c config_orig

Amanda Backup Client Hosts Check

ERROR: raleigh.cpt.afip.org: [could not access /dev/rdsk/dks0d4s3 (/dev/dsk/dks0d4s3): 
Permission denied]
ERROR: raleigh.cpt.afip.org: [could not access /dev/rdsk/dks0d4s2 (/gilmore): 
Permission denied]
Client check: 1 host checked in 20.043 seconds, 2 problems found

(brought to you by Amanda 2.4.2p2)
raleigh 38%

I see no clear evidence that amandad is the problem -- there have been up
to seven instances of it listed in a single ps -elf request, and there is
no "server timeout" notice when I run both client and server amcheck.

This still smells like an Irix suid problem, but I don't know how to get
at it. The only suid-oriented tweak we've made to the OS was in fixing
a well known security hole (-s bit set on file suid_exec) that shipped
with previous versions of Irix. Current version (6.5) does not even have
that file.

Still hoping that someone can post the solution, or maybe point to info on
how Amanda uses suid so I can track it down more easily. Thanks.

R. Becker


On Fri, 24 May 2002, Robert L. Becker wrote:

> I checked the permissions at the level of the mount point and the block
> device file. Indeed, the block device was more restricted (600) than the
> mount point (755). So, as a test, I set permissions at both entries to 777
> and retried amcheck. No effect. The client check still flags an error:
>
> Amanda Backup Client Hosts Check
> 
> ERROR: raleigh: [could not access /dev/rdsk/dks0d4s3 (/whitmore):
> Permission denied]
>
>
> I wonder if this is a remote host (any client) access problem, though I've
> tried to cover this in ~amanda/.amandahosts:
>
> raleigh 10% cat ~amanda/.amandahosts
> raleigh.cpt.afip.org amanda
> raleigh amanda
>
> I also wonder if current absence of an amandad process (according to ps
> -elf) is a clue -- though I have made the recommended entries in
> /etc/services and /etc/inetd.conf files and saw that as many as two
> amandad processes showed in the ps listing when I stopped/restarted the
> network after editing those files. Looking for suggestions still...
>
> R. Becker
>
>
> On Fri, 24 May 2002, fil krohnengold wrote:
>
> > At Fri, 24 May 2002 14:44:04 EDT, [EMAIL PROTECTED] wrote...
> > : Here's a permissions problem that I don't understand, reported by
> > : amcheck. Client and server are the same host (raleigh):
> > :
> > [...]
> > : Amanda Backup Client Hosts Check
> > : 
> > : ERROR: raleigh: [could not access /dev/rdsk/dks0d4s3 (/whitmore):
> > : Permission denied]
> > [...]
> > :
> > : Amanda is installed as user amanda in group sys. Far as I can tell, the
> > : file systems should be ok for amanda to read. For example:
> > :
> > : raleigh 2% ls -l / | grep whit
> > : drwxr-xr-x3 root sys 24 May 15 08:02 whitmore
> >
> > Significant permissions are set on the raw disk device - under
> > solaris it looks like this (excuse the long lines):
> >
> >   blinky:~> df -k .
> >   Filesystem            kbytes    used   avail capacity  Mounted on
> >   /dev/dsk/c0t0d0s4    6705645 1676432 4962157    26%    /local
> >   blinky:~> ls -l /dev/dsk/c0t0d0s4
> >   lrwxrwxrwx   1 root root  46 Dec 30 16:41 /dev/dsk/c0t0d0s4 -> 
>../../devices/pci@1f,0/pci@1,1/ide@3/dad@0,0:e
> >   blinky:~> ls -l /devices/pci@1f,0/pci@1,1/ide@3/dad@0,0:e
> >   brw-rw----   1 root     sys       65,  4 Jan  9 19:03 
>/devices/pci@1f,0/pci@1,1/ide@3/dad@0,0:e
> >
> > Check to see what the permissions are on /dev/rdsk/dks0d4s3,
> > etc..  That may be your problem.
> >
> > -fil
> > --
> > fil krohnengold
> > systems administrator - IT
> > american museum of natural history
> > [EMAIL PROTECTED]
> >
>
> Robert L. Becker, Jr.
> Col, USAF, MC
> Department of Cellular Pathology
> Armed Forces Institute of Pathology
> Washington, DC 20306-6000
> 301-319-0300
>
>

Robert L. Becker, Jr.
Col, USAF, MC
Department of Cellular Pathology
Armed Forces Institute of Pathology
Washington, DC 20306-6000
301-319-0300





Re: Linux DUMP

2002-05-28 Thread mcguire

This linux-kernel mailing list posting has a short summary and
interesting update on the issue (that I had not seen before, anyway).

http://www.cs.helsinki.fi/linux/linux-kernel/2001-40/1002.html

Summary: kernels later than 2.4.11 are okey-dokey.

"Anthony A. D. Talltree" <[EMAIL PROTECTED]> wrote:
 >>It's interesting that I was unaware of this dilemma ( the possible failure
 >>of DUMP ) until it was posted on this list
 >
 >It's mentioned in the second paragraph of Sun's ufsdump man page. 
 >Despite all the FUD that's been parroted about dump over the years, by
 >and large it's worked just fine for most people.
 >
 >Either the Linux kernel or the Linux incarnation of dump (or both) is
 >apparently way broken, and Linus (or whoever) seems to be too lazy or
 >stubborn to fix it.  Tar is simply not a general-purpose replacement for
 >something that backs up a whole filesystem without trashing the read
 >dates on files - and in much less time.
 >


Tommy McGuire



Re: amcheck && inetd looping error

2002-05-28 Thread mcguire

As I recall, that problem can result from a problem in the client
amandad configuration, too.  Amcheck should not be sending large
numbers of requests.  Is there anything in the client's debug files
(/tmp/amanda/*)?

James Shearer <[EMAIL PROTECTED]> wrote:
 >A two-fold newbie (Amanda, Solaris) asks:
 >
 >I am having some problems making amanda happy as a client on a Solaris 8
 >box.  In a nutshell, I am seeing the inetd looping error discussed on
 >the list back in February.  Basically, the problem is this:
 >
 >1.) I have a Solaris 8 box configured as the tape server with no
 >*apparent* problems.
 >
 >2.) I am trying to configure another Solaris 8 box as a client.  The
 >build of 2.4.2p2 goes just fine, as does the modification and SIGHUPing
 >of inetd.conf.  'netstat -a | grep -i amanda' verifies that the amanda
 >service is indeed listening as it should be
 >
 >3.) When I run 'su amanda -c "/usr/local/sbin/amcheck daily"', everything
 >is a-ok with my configuration except that the client (murrow) host check
 >fails like this:
 >
 >  WARNING: murrow: selfcheck request timed out.  Host down?
 >
 >4.) So, I checked /var/adm/messages on murrow to find the following
 >messages (trimmed):
 >
 >[ID 858011 daemon.warning] /usr/local/libexec/amandad: Killed
 >last message repeated 38 times
 >amanda/udp server failing (looping), service terminated
 >
 >5.) So it seems that amcheck is pushing more than 40 requests per minute
 >(the default limit for Solaris inetd) at the amanda service on the
 >client.  I can accommodate this by making use of the '-r' switch to
 >inetd.  But, my questions are:
 >
 >Do I want to do this?  Why is amcheck pushing so many requests at the
 >client service?  How many requests per minute should I allow for?  And
 >(total newbie-ness, sorry) how can I restart inetd with the added '-r'
 >switch properly without *rebooting* the box?  Can it just be started by
 >hand "safely?"
 >
 >Thanks for any guidance!
 >
 >Jim
 >
 >
 >
 >
 >
 >


Tommy McGuire



Re: amcheck && inetd looping error

2002-05-28 Thread Doug Silver

Hi James -

I had the same problem back in June on a FreeBSD box -- the problem turned
out to be the tcp wrapper file (hosts.allow).  I forgot to add the entry
for amandad.   
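i.e. a line along these lines in /etc/hosts.allow on the client (the
server name is a placeholder):

amandad : amanda-server.example.com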

Also check here:
http://amanda.sourceforge.net/fom-serve/cache/140.html

HTH!

-doug

-- 
~~
Doug Silver
Network Manager
Urchin Corporation  http://www.urchin.com
~~

On 28 May 2002, James Shearer wrote:

> A two-fold newbie (Amanda, Solaris) asks:
> 
> I am having some problems making amanda happy as a client on a Solaris 8
> box.  In a nutshell, I am seeing the inetd looping error discussed on
> the list back in February.  Basically, the problem is this:
> 
> 1.) I have a Solaris 8 box configured as the tape server with no
> *apparent* problems.
> 
> 2.) I am trying to configure another Solaris 8 box as a client.  The
> build of 2.4.2p2 goes just fine, as does the modification and SIGHUPing
> of inetd.conf.  'netstat -a | grep -i amanda' verifies that the amanda
> service is indeed listening as it should be
> 
> 3.) When I run 'su amanda -c "/usr/local/sbin/amcheck daily"', everything
> is a-ok with my configuration except that the client (murrow) host check
> fails like this:
> 
>   WARNING: murrow: selfcheck request timed out.  Host down?
> 
> 4.) So, I checked /var/adm/messages on murrow to find the following
> messages (trimmed):
> 
> [ID 858011 daemon.warning] /usr/local/libexec/amandad: Killed
> last message repeated 38 times
> amanda/udp server failing (looping), service terminated
> 
> 5.) So it seems that amcheck is pushing more than 40 requests per minute
> (the default limit for Solaris inetd) at the amanda service on the
> client.  I can accommodate this by making use of the '-r' switch to
> inetd.  But, my questions are:
> 
> Do I want to do this?  Why is amcheck pushing so many requests at the
> client service?  How many requests per minute should I allow for?  And
> (total newbie-ness, sorry) how can I restart inetd with the added '-r'
> switch properly without *rebooting* the box?  Can it just be started by
> hand "safely?"
> 
> Thanks for any guidance!
> 
> Jim
> 
> 
> 
> 
> 
> 




retain dump files on holding disk

2002-05-28 Thread Nick Russo

Couldn't find this anywhere in the FAQ-O-Matic, but I'd welcome
a pointer if an answer is already floating around somewhere.

I'd like to keep all the dump images on disk even after they
get written to tape. My reasoning is that a large percentage
of the restore requests I get from my users could be satisfied
by files from the last day or two. If those files were still
on disk, I could restore them without carrying a tape down to
the machine room ;)

Of course, I can't keep the files around indefinitely, so I
imagine them getting rotated out after one or two days, depending
on how much disk space I have to spare on the holding disk.

Does amanda have a built-in way to do this, or does anyone have
a hack to accomplish something close?

Thanks,
Nick


 Nick Russo   email: [EMAIL PROTECTED]   phone: 773.702.3438
Computer Science Department   The University of Chicago
 Associate Director of Computing Systems, Systems Lead




Re: Linux DUMP

2002-05-28 Thread C. Chan

Also Sprach Anthony A. D. Talltree:

> >It's interesting that I was unaware of this dilemma ( the possible failure
> >of DUMP ) until it was posted on this list
>
> It's mentioned in the second paragraph of Sun's ufsdump man page.
> Despite all the FUD that's been parroted about dump over the years, by
> and large it's worked just fine for most people.
>

I haven't had any significant problems with ufsdump on Solaris,
xfsdump on IRIX, vdump on Tru64, or backup on AIX, although it
is not quite clear to me whether xfsdump, vdump, and backup are
file-level or block-level utilities. Sometimes an active file doesn't
make it, but I haven't had a totally corrupt dump image.

However, I am able to do backups at a time when the filesystems are
mostly quiescent; I don't run a 24x7 mail server with hundreds or thousands
of users, for example. Is anyone actually using dump to back up such
a partition without taking the partition offline? The admins I speak
with who have such systems usually use snapshotting, or mirroring,
quiescing and breaking the mirror, and then using dump.

> Either the Linux kernel or the Linux incarnation of dump (or both) is
> apparently way broken, and Linus (or whoever) seems to be too lazy or
> stubborn to fix it.

You left out the possibilities of incompetence and/or cross/recursive
linkages in their phylogenetic trees.

> Tar is simply not a general-purpose replacement for
> something that backs up a whole filesystem without trashing the read
> dates on files - and in much less time.
>

In particular, trying to do incremental backups using a file-level
util on partitions with files that continually grow, like mail spools
or syslogs.

-- 
C. Chan <[EMAIL PROTECTED]>
GPG Public Key: finger [EMAIL PROTECTED]




Re: Linux DUMP

2002-05-28 Thread Michael Hicks

[EMAIL PROTECTED] wrote:
>
> This linux-kernel mailing list posting has a short summary and
> interesting update on the issue (that I had not seen before, anyway).

I believe some of the problems with Linux ext2 dump can be avoided by using
Linux's Logical Volume Management and the snapshot capability.  However, I
don't know how stable that code is..

You need to have your system running LVM, but once it is, you can create a
snapshot partition.  The kernel stops any data from being written to the
normal partition and writes any changes to the smaller snapshot area.  The
filesystem can then be backed up, even while it appears to be in use.  Once
the backup completes, the snapshot data can be applied to the regular
partition..
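A rough sketch of that dance (volume names and sizes are placeholders,
and it assumes a kernel with LVM snapshot support):

# lvcreate -L 500M -s -n homesnap /dev/vg00/home   # copy-on-write snapshot
# mount -o ro /dev/vg00/homesnap /mnt/snap
# tar cf /dev/nst0 -C /mnt/snap .                  # back up the frozen view
# umount /mnt/snap
# lvremove -f /dev/vg00/homesnap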

I don't know how you'd get something like this to work in Amanda, though.. 
I suspect there needs to be some client-side scripting ability for it to
work properly.

-- 
Mike Hicks   [mailto:[EMAIL PROTECTED]]
   Unix Support Assistant| Carlson School of Management
Office: 1-160  Phone: 6-7909 |   University of Minnesota





Security Issue

2002-05-28 Thread Tom Beer

Hi, 

referring to the mentioned advisories, I would like to know what
the latest stable version of Amanda is that is not affected.
I thought that 2.4.2p2 was the latest, as mentioned a week or
so ago on this list. Below, only 2.3.0.4 is mentioned. But this
wasn't shipped with FreeBSD 4.5. 

Thanks for info, a confused Tom

http://online.securityfocus.com/archive/1/274215

Package:  AMANDA
Version:  2.3.0.4
Date: 26/05/2002
Issue:Local and remote overflows
Risk: Medium since this is an old package
Credits:  zillion[at]safemode.org
  http://www.safemode.org
  http://www.snosoft.com

The Advanced Maryland Automatic Network Disk Archiver (AMANDA) is
a backup system which is available for many different Unix-based
operating systems. Several setuid and setgid binaries which are
installed by this package contain buffer overflow vulnerabilities
that can be used to execute shellcode with elevated privileges.
Additionally, the amindexd daemon contains a remote overflow bug
that can lead to a remote system compromise.

The affected version of AMANDA is an old package but is often used
due to compatibility problems with newer versions. For example,
this package was until recently shipped with the FreeBSD 4.5 ports
collection.






Re: amcheck && inetd looping error

2002-05-28 Thread James Shearer


To follow up on my eariler problem, the checklist Doug pointed me to at 

http://amanda.sourceforge.net/fom-serve/cache/140.html 

was most helpful.  It turns out that libreadline was not seen by the
amanda user on the client, and thus the amandad daemon was croaking.  I
simply made sure that the shared libraries were in the amanda user's
library path, and it all works well now.
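For reference, one way to make such libraries visible to an
inetd-spawned amandad on Solaris 8 (assuming they live in
/usr/local/lib) is to add that directory to the default runtime linker
search path:

# crle -u -l /usr/local/lib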

Thanks to both Tom and Doug for your useful suggestions!

On Tue, 2002-05-28 at 12:30, James Shearer wrote:

> I am having some problems making amanda happy as a client on a Solaris 8
> box.  In a nutshell, I am seeing the inetd looping error discussed on
> the list back in February.  Basically, the problem is this:






Re: Security Issue

2002-05-28 Thread John Cartwright

On Tue, May 28, 2002 at 08:19:27PM +0200, Tom Beer wrote:
> referring to the mentioned advisories, I would like to know what
> the latest stable version of Amanda is that is not affected.
> I thought that 2.4.2p2 was the latest, as mentioned a week or
> so ago on this list. Below, only 2.3.0.4 is mentioned. But this
> wasn't shipped with FreeBSD 4.5. 

Tom, this is what I was referring to in my post yesterday.
I assume that 2.4.2p2 is 'safe', but it would be good to have
the official word on this ...

- John



Re: Security Issue

2002-05-28 Thread Mitch Collinsworth


Removing their 2.3.0.4 port was the right thing for FreeBSD to do.
That version is what, 4 years old or more?  I'd challenge the
"often used" assertion in the announcement.  If it's often used
it's only because folks like FreeBSD have been shipping it long
after it should have been replaced with a 2.4.x version.

If you look at amanda's download page at:
http://www.amanda.org/download.html

you'll see 2.4.2p2 is the latest "stable release" and that there
have been various development releases since then.

-Mitch

On Tue, 28 May 2002, Tom Beer wrote:

> Hi,
>
> referring to the mentioned advisories, I would like to know what
> the latest stable version of Amanda is that is not affected.
> I thought that 2.4.2p2 was the latest, as mentioned a week or
> so ago on this list. Below, only 2.3.0.4 is mentioned. But this
> wasn't shipped with FreeBSD 4.5.
>
> Thanks for info, a confused Tom
>
> http://online.securityfocus.com/archive/1/274215
>
> Package:  AMANDA
> Version:  2.3.0.4
> Date: 26/05/2002
> Issue:Local and remote overflows
> Risk: Medium since this is an old package
> Credits:  zillion[at]safemode.org
>   http://www.safemode.org
>   http://www.snosoft.com
>
> The Advanced Maryland Automatic Network Disk Archiver (AMANDA) is
> a backup system which is available for many different Unix-based
> operating systems. Several setuid and setgid binaries which are
> installed by this package contain buffer overflow vulnerabilities
> that can be used to execute shellcode with elevated privileges.
> Additionally, the amindexd daemon contains a remote overflow bug
> that can lead to a remote system compromise.
>
> The affected version of AMANDA is an old package but is often used
> due to compatibility problems with newer versions. For example,
> this package was until recently shipped with the FreeBSD 4.5 ports
> collection.
>
>
>
>




Re: Bad magic number in super-block

2002-05-28 Thread Andre Gauthier

Hi, thanks for your reply. Yes, I tried tarring and untarring and it works.

Chris Marble wrote:
> 
> Andre Gauthier wrote:
> >
> > I have recently installed Amanda on Red Hat 7.2 with ext3 fS. I am using
> > tar-1.13.25-4. I complied with group=backup. Ran amcheck and it did not
> > report any errors, but when I ran amdump it did not create an index or a
> > backup. In the debug logs I got permission denied while opening
> > filesystem. I recompiled with user=disk, and I got the same error
> > message. Then chmod o+r on the /dev/sdg1 the filesystem in question as
> > an experiment, it no longer had permission denied
> > but rather the error message /dev/sdg1: Bad magic number in super-block
> > while opening filesystem.  The filesystem is on the local host.
> 
> Are you sure you have ext3-aware versions of all the ufs tools?
> Can you do a simple tar of the filesystem to tape (without amanda)?
> --
>   [EMAIL PROTECTED] - HMC UNIX Systems Manager



backup over the network

2002-05-28 Thread jean

Hi,
I am very new to amanda and this group.  I have read the faq-o-matic list 
but couldn't find the answer, so I think I will ask here.
We have amanda 2.4.1 running on a Linux system.  It is working fine so 
far.  Currently it only backs up the server that it sits on to a tape 
every day.  Now we would like to add one more machine to be backed up by it.  I 
would like some information on how to change the current 
configuration so that it will go to the IP of that "new" machine and back 
up its contents onto the same tape.
I hope I don't have to re-install the program again.  Thanx in advance.

regards,
Jean 




Re: Running out of tape due to full dumps

2002-05-28 Thread Bradley Marshall

On Tue, May 28, 2002 at 08:11:27AM -0400, Joshua Baker-LePain wrote:
> Why not define a slightly smaller tapelength?  That way amanda won't try 
> to put too much on the tape.
> 
> Are you using hardware compression?  You may be assuming too much 
> compression.

I'm actually using software compression for some reason.  I'll drop
down the tapelength and see how that goes.

Thanks,
Brad
-- 
+=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-+
|Brad Marshall|   Plugged In Software|
|Senior Systems Administrator | http://www.pisoftware.com|
|mailto:[EMAIL PROTECTED]   |  GPG Key Id: 47951BD0 / 1024b|
+-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=+
 Fingerprint:  BAE3 4794 E627 2EAF 7EC0  4763 7884 4BE8 4795 1BD0



Re: Running out of tape due to full dumps

2002-05-28 Thread Joshua Baker-LePain

On Wed, 29 May 2002 at 9:02am, Bradley Marshall wrote

> On Tue, May 28, 2002 at 08:11:27AM -0400, Joshua Baker-LePain wrote:
> > Why not define a slightly smaller tapelength?  That way amanda won't try 
> > to put too much on the tape.
> > 
> > Are you using hardware compression?  You may be assuming too much 
> > compression.
> 
> I'm actually using software compression for some reason.  I'll drop
> down the tapelength and see how that goes.

How much are you getting on tape, and how much do you expect to get on 
tape?  You can tell how much successfully made it to tape by looking at 
the "taper:" line in the NOTES section of the amanda report.  If the value 
is a lot smaller than you expect it to be, there may be other issues.

-- 
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University





Re: backup over the network

2002-05-28 Thread Joshua Baker-LePain

On Tue, 28 May 2002 at 3:45pm, jean wrote

> but couldn't find the answer, so I think I will ask here.
>   We have amanda 2.4.1 running on a Linux system.  It is working fine so 
> far.  Currently it only backs up the server that it sits on to a tape 
> every day.  Now we would like to add one more machine to be backed up by it.  I 
> would like some information on how to change the current 
> configuration so that it will go to the IP of that "new" machine and back 
> up its contents onto the same tape.
>   I hope I don't have to re-install the program again.  Thanx in advance.

You'll need to compile, install, and configure amanda on the new machine 
you want to back up.  Then, add its hostname and the partitions/directories 
you want to back up to the disklist on the server.  Details in 
docs/INSTALL, the "chapter" at www.backupcentral.com, F-O-M, the archives 
of this list, and our brains which you're welcome to pick after you've 
exhausted everything else.  :)
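For instance, a disklist line along these lines (hostname, path, and
dumptype name are placeholders):

newclient.example.com  /home  comp-user-tar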

Good luck.

-- 
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University




Re: Running out of tape due to full dumps

2002-05-28 Thread Bradley Marshall

On Tue, May 28, 2002 at 08:03:31PM -0400, Joshua Baker-LePain wrote:
> On Wed, 29 May 2002 at 9:02am, Bradley Marshall wrote
> > I'm actually using software compression for some reason.  I'll drop
> > down the tapelength and see how that goes.
> How much are you getting on tape, and how much do you expect to get on 
> tape?  You can tell how much successfully made it to tape by looking at 
> the "taper:" line in the NOTES section of the amanda report.  If the value 
> is a lot smaller than you expect it to be, there may be other issues.

It's a DDS3 tape, and it runs out when it tries to flush about 16G
(30G uncompressed).  Most nights do nowhere near this much;
it's only when it tries to do a full dump of certain partitions.
I'll see how things go with the tapelength set down to 12G.

Thanks,
Brad
-- 
+=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-+
|Brad Marshall|   Plugged In Software|
|Senior Systems Administrator | http://www.pisoftware.com|
|mailto:[EMAIL PROTECTED]   |  GPG Key Id: 47951BD0 / 1024b|
+-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=+
 Fingerprint:  BAE3 4794 E627 2EAF 7EC0  4763 7884 4BE8 4795 1BD0



Getting My Data Back Off The Tape

2002-05-28 Thread GIC MLs

Hi,

Ok, so I have finally got Amanda to write a partition to tape.
Now I want to grab a file off of the tape.

I'm using Amanda-2.4.2p2 on FreeBSD.

After a successful amcheck and amdump, I run

# mt rewind
# mt fsf 001

to position the tape at the beginning of where I wrote daily001.
I'm following the instructions in Unix Backup & Recovery here - Chapter 4
page 183.

I have backed up /usr on the client machine, and in /usr I have a file
called usr.dump that I would like to recover. I am trying to do this with
the following command:

# dd if=/dev/nrsa0 bs=32k skip=1 | /usr/local/bin/gtar -xf usr.dump

... but I get:

/usr/local/bin/gtar: usr.dump: Cannot open: (null)
/usr/local/bin/gtar: Error is not recoverable: exiting now

What am I doing wrong here? Am I using the gtar command incorrectly?
I read through the tar manpage trying to find my mistake, but can't seem to
see what I'm doing wrong.

Any advice appreciated,

Shawn




Re: Getting My Data Back Off The Tape

2002-05-28 Thread Mitch Collinsworth


On Wed, 29 May 2002, GIC MLs wrote:

> # dd if=/dev/nrsa0 bs=32k skip=1 | /usr/local/bin/gtar -xf usr.dump
>
> ... but I get:
>
> /usr/local/bin/gtar: usr.dump: Cannot open: (null)
> /usr/local/bin/gtar: Error is not recoverable: exiting now
>
> What am I doing wrong here? Am I using the gtar command incorrectly?
> I read through the tar manpage trying to find my mistake, but can't seem to
> see what I'm doing wrong.

Yeah, it's right here in the SYNOPSIS section of the man page:

SYNOPSIS
   tar  [  -  ]  A  --catenate --concatenate | c --create | d
   --diff --compare | r --append | t --list | u --update |  x
   --extract --get [ --atime-preserve ] [ -b, --block-size N ]
   [ -B, --read-full-blocks ]  [  -C,  --directory  DIR  ]  [
   --checkpoint ]  [ -f, --file [HOSTNAME:]F ] [ --force-
   local   ] [ -F, --info-script F --new-volume-script F ]  [
   -G,  --incremental  ] [ -g, --listed-incremental F ] [ -h,
   --dereference ] [ -i, --ignore-zeros ] [ -I,  --bzip  ]  [
   --ignore-failed-read  ]  [  -k,  --keep-old-files  ] [ -K,
   --starting-file F  ]  [  -l,  --one-file-system  ]  [  -L,
   --tape-length  N  ]  [  -m,  --modification-time  ]  [ -M,
   --multi-volume ] [ -N, --after-date DATE, --newer DATE ] [
   -o,  --old-archive,  --portability ] [ -O, --to-stdout ] [
   -p, --same-permissions,  --preserve-permissions  ]  [  -P,
   --absolute-paths ] [ --preserve  ] [ -R, --record-number ]
   [ --remove-files ] [ -s,  --same-order,  --preserve-
   order  ]  [ --same-owner ] [ -S, --sparse ] [ -T, --files-
   from F ] [ --null ] [ --totals   ] [ -v, --verbose ] [
   -V,  --label  NAME  ]  [ --version  ] [ -w, --interactive,
   --confirmation ] [ -W, --verify] [ --exclude FILE ]  [
   -X, --exclude-from FILE ] [ -Z, --compress, --uncompress ]
   [ -z, --gzip,  --ungzip   ]  [  --use-compress-program
   PROG ] [ --block-compress ] [ -[0-7][lmh] ]

   filename1 [ filename2, ... filenameN ]

   directory1 [ directory2, ...directoryN ]


Ok, see it now?  (Neither do I  :-)  But you need a - after the -xf;
otherwise tar is trying to read its input from usr.dump, which it can't
find.  A - after the -f option says to read from stdin.  So try changing
that to read:

# dd if=/dev/nrsa0 bs=32k skip=1 | /usr/local/bin/gtar -xf - usr.dump

and let us know how it goes.

-Mitch




help on amanda is urgently needed

2002-05-28 Thread Wong Ching Kuen Frederick

I got amanda set up and running on a redhat 7.1 system. The tape drive is a
CONNER CTT8000-A (i.e. a TR-4 tape drive). I use amcheck and everything
works fine. However, when I try to use amdump, all data is stored in the
holding disk but fails to transfer to the tape. Below are my config file and
the amstatus report. It keeps showing "writing to tape", but nothing gets
written. Please advise. Your help is very much appreciated. Thanks.

***

amstatus:

Using /usr/local/amanda/etc/WeeklySet/log/amdump from Wed May 29 11:05:28
HKT 2002

localhost:/etc       1      13k writing to tape (11:05:39)

SUMMARY           part      real  estimated
                            size       size
partition       :    1
estimated       :    1                95k
flush           :    0        0k
failed          :    0                 0k           (  0.00%)
wait for dumping:    0                 0k           (  0.00%)
dumping to tape :    0                 0k           (  0.00%)
dumping         :    0        0k        0k (  0.00%) (  0.00%)
dumped          :    1       13k       95k ( 13.68%) ( 13.68%)
wait for writing:    0        0k        0k (  0.00%) (  0.00%)
wait to flush   :    0        0k        0k (100.00%) (  0.00%)
writing to tape :    1       13k       95k ( 13.68%) ( 13.68%)
failed to tape  :    0        0k        0k (  0.00%) (  0.00%)
taped           :    0        0k        0k (  0.00%) (  0.00%)
4 dumpers idle  : not-idle
taper writing, tapeq: 0
network free kps:          1
holding space   :   8191954k (100.00%)
 dumper0 busy   :  0:00:00  (  2.58%)
 0 dumpers busy :  0:00:00  (  0.00%)
 1 dumper busy  :  0:00:00  (  2.76%)

*

amanda.conf:

org "WeeklySet"
mailto "root"
dumpuser "amanda"

inparallel 4
netusage  6000

dumpcycle 4 weeks
runspercycle 4
tapecycle 5 tapes

bumpsize 20 Mb
bumpdays 1
bumpmult 4

etimeout 300

runtapes 1
tapedev "/dev/nht0"

tapetype TR-4
labelstr "^WeeklySet[0-9][0-9]*$"

holdingdisk tmp {
comment "main holding disk"
directory "/tmp"
use 8000 Mb
}

infofile "/usr/local/amanda/etc/WeeklySet/curinfo"  # database DIRECTORY
logdir   "/usr/local/amanda/etc/WeeklySet/log"  # log directory
indexdir "/usr/local/amanda/etc/WeeklySet/index"# index directory

define tapetype TR-4 {
comment "Imation Travan TR-4 minicartridge (as per tapetype)"
length 3970 mbytes
filemark 0 kbytes
speed 505 kps
}

define dumptype global {
comment "Global definitions"
}

define dumptype dump-backup {
global
comment "Full backup with fast compression in client"
compress client fast
index
}

define dumptype tar-backup {
global
comment "Full backup with fast compression in client"
compress client fast
program "GNUTAR"
index
}

define interface eth0 {
comment "100 Mbps ethernet"
use 4000 kbps
}




Re: Linux DUMP

2002-05-28 Thread Uncle George

Gee, fellas, I didn't mean this to be a slam fest (or was that a 'dis'
fest).

I just think, particularly on this list, that there are caveats out
there AND FOLKS ON THIS LIST would be the best ones to know about such
things. There is one for dump on linux, and it appears that there might
be other gotchas out there. But I'd really like to know about such
things before I need to REALLY do a restore - which is, as someone
pointed out before, not the best time to figure out that your backups
were not faithfully done.

 

"C. Chan" wrote:
> 
> Also Sprach Anthony A. D. Talltree:
> 
> > >Its interesting that I was unaware of this dilema ( the possible failure
> > >of DUMP ) until it was posted on this list
> >
> > It's mentioned in the second paragraph of Sun's ufsdump man page.



Re: Getting My Data Back Off The Tape

2002-05-28 Thread GIC MLs

> Yeah, it's right here in the SYNOPSIS section of the man page:
>
> SYNOPSIS
>tar  [  -  ]  A  --catenate --concatenate | c --create | d



> Ok, see it now?  (Neither do I  :-)

:-)))

> But you need a - after the -xf,
> otherwise tar is trying to read its input from usr.dump, which it can't
> find.  a - after the -f option says to read from stdin.  So try changing
> that to read:
>
> # dd if=/dev/nrsa0 bs=32k skip=1 | /usr/local/bin/gtar -xf - usr.dump
>
> and let us know how it goes.
>
> -Mitch

Ok, that is exactly what I was missing - thanks!

Now my problem is this...

# dd if=/dev/nrsa0 bs=32k skip=1 | /usr/local/bin/gtar -xf - usr.dump
0+0 records in
0+0 records out
0 bytes transferred in 0.040918 secs (0 bytes/sec)
/usr/local/bin/gtar: usr.dump: Not found in archive
/usr/local/bin/gtar: Error exit delayed from previous errors

Hmm... usr.dump *does* exist in /usr on the client... wonder why it didn't
get anything...
I tried to recover another file which exists on the client as
/usr/home/user/amanda-2.4.2p2.tar.gz

# dd if=/dev/nrsa0 bs=32k skip=1 | /usr/local/bin/gtar -xf -
amanda-2.4.2p2.tar.gz
53741+0 records in
53741+0 records out
1760985088 bytes transferred in 594.278274 secs (2963233 bytes/sec)
/usr/local/bin/gtar: amanda-2.4.2p2.tar.gz: Not found in archive
/usr/local/bin/gtar: Error exit delayed from previous errors

This time it looks like there was something there, data transferred... but
then why would it say "Not found in archive?"
And if the data transferred, where did it go? Not to the directory where I
ran the dd command from... Did it get sent back to the client automagically?

Thanks,

Shawn


Re: backup over the network

2002-05-28 Thread Kevin Hancock

> > I hope I don't have to re-install the program again.  Thanx in advance.
>
> You'll need to compile, install, and configure amanda on the new machine

You really only need to install the amanda client; you do not need the server.

You are using Linux; if it is an RPM-based system, just install the
amanda-common and amanda-client packages. Restart (x)inetd, add the amanda
server to .amandahosts, and the client is done.

You may have to edit hosts.allow and any firewall scripts, depending on how
you handle your security.

Edit the disklist on the server and run amcheck.

I was amazed at how easy it is to add clients.
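
Putting those steps together, a minimal sketch of the client side (the
server name and package versions here are hypothetical):

    # on the new client, as root
    rpm -ivh amanda-common-2.4.x.rpm amanda-client-2.4.x.rpm
    echo "backupserver.example.com amanda" >> ~amanda/.amandahosts
    /etc/rc.d/init.d/xinetd restart    # pick up the amanda service entry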


Kevin



Re: Getting My Data Back Off The Tape

2002-05-28 Thread Nick Russo

On Wed, 29 May 2002, GIC MLs wrote:

> Now my problem is this...
> 
> # dd if=/dev/nrsa0 bs=32k skip=1 | /usr/local/bin/gtar -xf - usr.dump

First make a scratch directory like this:
  mkdir /var/tmp/testing/
Then, from inside that directory, run the same dd command, but
with "gtar -tvf -" so you can see exactly which files got into
the archive. This also shows what the file names look like. They
won't start with a slash, because tar strips those, but they
probably will start with ./ depending on how tar was called.

Ultimately, you'll probably need to use "gtar -xf - ./usr.dump"

> And if the data transferred, where did it go to? Not to the directory where I
> ran the dd command from... Did it get sent back to the client automagically?

I wouldn't take the chance of overwriting something accidentally.
That's why I suggest you run the dd and tar commands from within
a new, empty directory.
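
Putting that together, a sketch of the whole sequence (this assumes the
tape is positioned at the start of the dump image, as in the earlier
posts; N is a placeholder for your tape file number):

    mkdir /var/tmp/testing
    cd /var/tmp/testing
    # pass 1: list the member names (they will start with ./)
    dd if=/dev/nrsa0 bs=32k skip=1 | /usr/local/bin/gtar -tvf -
    # reposition the tape before the second pass, e.g.:
    #   mt -f /dev/nrsa0 rewind ; mt -f /dev/nrsa0 fsf N
    # pass 2: extract using the name exactly as listed
    dd if=/dev/nrsa0 bs=32k skip=1 | /usr/local/bin/gtar -xf - ./usr.dump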


 Nick Russo   email: [EMAIL PROTECTED]   phone: 773.702.3438
Computer Science Department   The University of Chicago
 Associate Director of Computing Systems, Systems Lead






Re: Getting My Data Back Off The Tape

2002-05-28 Thread Jon LaBadie

On Tue, May 28, 2002 at 10:46:01PM -0400, Mitch Collinsworth wrote:
> 
> On Wed, 29 May 2002, GIC MLs wrote:
> 
> > # dd if=/dev/nrsa0 bs=32k skip=1 | /usr/local/bin/gtar -xf usr.dump
> >
> > ... but I get:
> >
> > /usr/local/bin/gtar: usr.dump: Cannot open: (null)
> > /usr/local/bin/gtar: Error is not recoverable: exiting now
> >
> > What am I doing wrong here? Am I using the gtar command incorrectly?
> > I read through the tar manpage trying to find my mistake, but can't seem to
> > see what I'm doing wrong.
> 
> Yeah, it's right here in the SYNOPSIS section of the man page:
>
> Ok, see it now?  (Neither do I  :-)  But you need a - after the -xf,

Yeah, it is :))

> SYNOPSIS
>tar  [  -  ]  A  --catenate --concatenate | c --create | d
>--diff --compare | r --append | t --list | u --update |  x
>-extract --get [ --atime-preserve ] [ -b, --block-size N ]
>[ -B, --read-full-blocks ]  [  -C,  --directory  DIR  ]  [
>--checkpoint ]  [ -f, --file [HOSTNAME:]F ] [ --force-

   ^^^

The gtar options being used are -x and -f (-xf).

The -f option, aka --file, takes a required argument "F" (required
is noted by the absence of [...]) with an optional ([...] present)
prefix of "HOSTNAME:".  I'm sure the required argument "F" is explained
later as the name of the tarball file.  Here there is no tarball file;
the data is coming from standard input (the pipe).  But the argument F is
still required.

It is a common unix convention to use "-" as a file name placeholder
meaning "standard input".  For example:  "ls | cat foo - bar" puts
the ls output between the contents of the files foo and bar.
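
A quick way to see the convention in action (the file names are made up
for the demonstration):

    $ echo one   > foo
    $ echo three > bar
    $ echo two | cat foo - bar
    one
    two
    three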


-- 
Jon H. LaBadie  [EMAIL PROTECTED]
 JG Computing
 4455 Province Line Road(609) 252-0159
 Princeton, NJ  08540-4322  (609) 683-7220 (fax)



Re: Getting My Data Back Off The Tape

2002-05-28 Thread GIC MLs

Thanks very much for the help...

> First make a scratch directory like this:
>   mkdir /var/tmp/testing/
> Then, from inside that directory, run the same dd command, but
> with "gtar -tvf -" so you can see exactly which files got into
> the archive. This also shows what the file names look like. They
> won't start with a slash, because tar strips those, but they
> probably will start with ./ depending on how tar was called.
>
> Ultimately, you'll probably need to use "gtar -xf - ./usr.dump"
>
> > And if the data transferred, where did it go to? Not to the directory
> > where I ran the dd command from... Did it get sent back to the client
> > automagically?
>
> I wouldn't take the chance of overwritting something accidentally.
> That's why I suggest you run the dd and tar commands from within
> a new, empty directory.

I had originally made the scratch directory so as to avoid problems, and had
been running the command from inside that directory.
I followed your advice and ran the command with the -tvf - flags, which
seemed to give me only a listing of directories:

# dd if=/dev/nrsa0 bs=32k skip=1 | /usr/local/bin/gtar -tvf -

I grepped this for amanda, but of course only got the directories containing
amanda; so finally I got on the ball and ran:

# dd if=/dev/nrsa0 bs=32k skip=1 | /usr/local/bin/gtar -xf - \
    ./home/user/amanda-2.4.2p2.tar.gz

which gave me just what I wanted - THANKS!

Jon LaBadie wrote:

> It is a common unix convention to use "-" as a file name placeholder
> meaning "standard input".  For example:  "ls | cat foo - bar" puts
> the ls output between the contents of the files foo and bar.

Didn't know that - that was one of the things that was confusing me, thanks
for the clarification.

Cheers much,

Shawn




Re: retain dump files on holding disk

2002-05-28 Thread Don Wolski

At 11:58 AM 5/28/02 -0500, Nick Russo wrote:
>I'd like to keep all the dump images on disk even after they
>get written to tape. My reasoning is that a large percentage
>of the restore requests I get from my users could be satisfied
>by files from the last day or two. 

Another reason for retaining dump images would be to make a second backup
tape, in case the tape just written turns out to be on its last legs and
errors on the next use, and so that one copy could be kept off-site (in case
the disaster that destroys your disks also destroys your tape copies kept
on-site).

So I would add to Nick's question: can the retained dump images be taped a
second time, hopefully in a way that allows amanda to record the fact that
the images exist on two different tapes?
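
One low-tech possibility, sketched here only for illustration (the paths
and the second tape device are hypothetical, and amanda's database would
not record the copy), is to write a retained holding-disk image to a
second tape by hand; holding-disk images carry the same 32k header that
amanda writes in front of each image on tape:

    # rewind the second drive, then copy the image to it
    mt -f /dev/nst1 rewind
    dd if=/holding/20020528/client._usr.0 of=/dev/nst1 bs=32k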

thanks,
/don

Don Wolski                   m/s, Natural Resources 211c
Unix System Administrator    Information Technology Unit
[EMAIL PROTECTED]            College of NR and Sciences
707-826-3536 (voice)         Humboldt State University
707-826-3501 (fax)           Arcata, CA 95521-8299



Re: Linux DUMP

2002-05-28 Thread Christopher Odenbach


Hi,

> You need to have your system running LVM, but once it is, you can
> create a snapshot volume.  The kernel copies each original block to
> the smaller snapshot area before it is overwritten, so the snapshot
> presents a frozen view of the filesystem.  That view can then be
> backed up, even while the real volume stays in use.  Once the backup
> completes, the snapshot is simply removed.
>
> I don't know how you'd get something like this to work in Amanda,
> though.  I suspect there needs to be some client-side scripting
> ability for it to work properly.

Up to now - yes. We are just having a discussion over this matter on 
amanda-hackers. Just look in the archives.
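
For the curious, the snapshot half of the idea looks roughly like this on
LVM (the volume names, sizes, and mount point are made up; this is a
sketch of the manual steps, not amanda integration):

    # create a copy-on-write snapshot of the home logical volume
    lvcreate --snapshot --size 500M --name homesnap /dev/vg0/home
    # mount the frozen view read-only and back it up
    mkdir -p /mnt/snap
    mount -o ro /dev/vg0/homesnap /mnt/snap
    tar -cf /dev/nst0 -C /mnt/snap .
    # discard the snapshot when done
    umount /mnt/snap
    lvremove -f /dev/vg0/homesnap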

Christopher


-- 
==
Dipl.-Ing. Christopher Odenbach
HNI Rechnerbetrieb
[EMAIL PROTECTED]
Tel.: +49 5251 60 6215
==