Partial results in planner

2007-01-05 Thread David Stillion
I did my first backup after installing Amanda and all seemed to go well, except
that only a small amount of data was actually backed up.

It appears that the planner was unable to determine the amount of data on the 
3 disks in the backup set.  Here is an excerpt from the amdump log file:

SETTING UP FOR ESTIMATES...
planner: time 0.002: setting up estimates for buster.zone64.net:/dev/hda2
buster.zone64.net:/dev/hda2 overdue 13519 days for level 0
setup_estimate: buster.zone64.net:/dev/hda2: command 0, options: none
last_level -1 next_level0 -13519 level_days 0getting estimates 0 (-2) -1 
(-2) -1 (-2)
planner: time 0.003: setting up estimates for buster.zone64.net:/dev/hda3
buster.zone64.net:/dev/hda3 overdue 13519 days for level 0
setup_estimate: buster.zone64.net:/dev/hda3: command 0, options: none
last_level -1 next_level0 -13519 level_days 0getting estimates 0 (-2) -1 
(-2) -1 (-2)
planner: time 0.003: setting up estimates for buster.zone64.net:/dev/hda6
buster.zone64.net:/dev/hda6 overdue 13519 days for level 0
setup_estimate: buster.zone64.net:/dev/hda6: command 0, options: none
last_level -1 next_level0 -13519 level_days 0getting estimates 0 (-2) -1 
(-2) -1 (-2)
planner: time 0.003: setting up estimates took 0.000 secs

GETTING ESTIMATES...
driver: pid 4883 executable /usr/libexec/driver version 2.4.5
driver: tape size 4096000
driver: send-cmd time 0.005 to taper: START-TAPER 20070105
driver: adding holding disk 0 dir /data/amanda/tmp/dumps size 35059852 
chunksize 512000
reserving 35059852 out of 35059852 for degraded-mode dumps
driver: started dumper0 pid 4885
driver: started dumper1 pid 4886
driver: started dumper2 pid 4887
planner: time 0.049: got partial result for host buster.zone64.net 
disk /dev/hda6: 0 -> -2K, -1 -> -2K, -1 -> -2K
planner: time 0.050: got partial result for host buster.zone64.net 
disk /dev/hda3: 0 -> -2K, -1 -> -2K, -1 -> -2K
planner: time 0.050: got partial result for host buster.zone64.net 
disk /dev/hda2: 0 -> -2K, -1 -> -2K, -1 -> -2K

So far I've found this issue discussed by one other user.  She said it was
a problem with the version of tar she was using; she is running RHEL4.  I'm
running Gentoo 2006.0 on both my server and workstation.  Has anyone else
run into this problem?
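In case anyone wants to reproduce the estimate by hand on the client: as I understand it, sendsize drives GNU tar with the archive written to /dev/null, so something like the following approximates what the planner is asking for. This is my approximation of the flags, not the exact sendsize command line; DIR should be the mount point behind one of the DLEs (e.g. whatever /dev/hda2 is mounted on), with "." here only as a stand-in.

```shell
# Approximate sendsize's GNU tar estimate by hand.
# DIR stands in for the mount point behind the DLE.
DIR="${DIR:-.}"
tar --create --file /dev/null --one-file-system --totals "$DIR" 2>&1 | tail -n 1
# A healthy GNU tar ends with a "Total bytes written: ..." line; no such
# line (or an error) from this command would explain the -2 estimates.
```

If that last line is missing or tar errors out, the tar build is the suspect, as in the RHEL4 report.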

Let me know and thanks. 


Re: Sendsize Timeout Errors

2007-01-05 Thread Kevin Till

John E Hein wrote:


This has been working for me for the past 4+ years.  But if I ever
start hitting the ~64 KiB udp per socket limit, something else will
have to be tried (as described in the above message).


As of Amanda 2.5.1, we have added bsdtcp auth, which uses TCP exclusively.  As a
result, the UDP packet size limitation is eliminated.


Reference: 
http://wiki.zmanda.com/index.php/Configuring_bsd/bsdudp/bsdtcp_authentication
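A minimal sketch of what the switch looks like on the server side (illustrative only; the wiki page above has the full details, including the client-side (x)inetd change from a dgram/udp to a stream/tcp service):

```
# amanda.conf dumptype (server side) -- illustrative
define dumptype client-tar-tcp {
    program "GNUTAR"
    auth "bsdtcp"
}
```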

--
Thank you!
Kevin Till

Amanda documentation: http://wiki.zmanda.com
Amanda forums: http://forums.zmanda.com


Re: Writing to tape performance issue?

2007-01-05 Thread Jon LaBadie
On Fri, Jan 05, 2007 at 03:53:32PM -0500, Mark Hennessy wrote:
> Many thanks for the answers to my previous questions.  I'm just trying to
> iron out a few minor issues with my backup cycle at this point.
> 
> It would appear, based on the "Tape Time" value provided in the report,
> that the rate at which data are written out to the tape is 3.7 MB/s.  The
> drive is rated by the manufacturer at 11 MB/s or 22 MB/s, so it would seem

Assuming you are using software compression, the 11 MB/sec is the
figure of interest.


> that 3.7 MB/s is a bit slow, yes?  Is "Tape Time" solely the time that Amanda
> is writing dumps directly to tape?  Does that include other interactions that
> would add a substantial amount of time as well?
> 
> I'm using Amanda 2.5.1p2 on FreeBSD 6.1 and a Quantum SDLT220 drive.
> 
> STATISTICS:
>   Total   Full  Incr.
...
> 
> Tape Time (hrs:min)        5:24       1:07       4:17
> Tape Size (meg)         72005.8    36674.1    35331.8
> Tape Used (%)              71.0       36.2       34.9
> Filesystems Taped            60         11         49
> Avg Tp Write Rate (k/s)  3793.3     9408.1     2342.3
> 

As I understand Amanda's workings, two processes are started
that serve as "masters" for the dumper and taper procs.
The taper waits until there is something to tape (a completed
DLE on the holding disk, for example) and forks off a child
taper process to actually do the taping.  The time that
child exists is what is recorded as "taping time".

Assuming the DLEs being taped are coming from a holding
disk, I would expect the speed of taping to be similar
for full and incremental dumps.  There is some overhead,
and that would be more significant for the smaller
incrementals, so a small difference is to be expected.
But your difference in rate, 9.4 vs. 2.3 MB/sec, is very
large.  Thus it seems no holding disk is involved.
Do you think you are using a holding disk?

If not, here is what I think is happening.  Your dumps
are skipping the holding disk, going "direct to tape".
I.e. the dumper for each DLE connects to a child taper
process via a pipe.  Now the taping rate is dependent
upon the dumping rate as well as other factors.

Incremental dumping rates are generally slower than
level 0 dump rates, perhaps because of the extra overhead of
jumping around getting individual files rather than getting all
the files in a directory.  That extra overhead probably
causes the pipe feeding the child taper to go empty
sometimes, and thus taping stops.  But time continues
to accumulate, so the total taping rate drops.

Note that your level 0 taping rate, 9.4 MB/sec, is reasonably
close to your tape drive's rated 11 MB/sec.  Less dumper
overhead keeps the pipe to the taper filled better,
so there are fewer or no stops in taping.
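The arithmetic backs this up.  Here's a rough sketch using the numbers from your report (sizes in MB and rates in KB/s copied from above; "streaming" assumes the drive could sustain the full-dump rate whenever the pipe had data):

```shell
# Rough stall estimate from the report's numbers.
awk 'BEGIN {
    incr_kb   = 35331.8 * 1024   # incremental Tape Size, MB -> KB
    avg_rate  = 2342.3           # incremental Avg Tp Write Rate, KB/s
    peak_rate = 9408.1           # full-dump write rate, KB/s
    actual = incr_kb / avg_rate  # seconds the child taper existed
    ideal  = incr_kb / peak_rate # seconds if taping never stalled
    printf "actual %.0f s, streaming %.0f s, implied stall %.1f h\n",
           actual, ideal, (actual - ideal) / 3600
}'
```

That works out to about 3.2 hours of implied stall -- roughly three quarters of the 4:17 incremental taping time spent with the taper waiting on a dumper.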

Why aren't you using a holding disk?

-- 
Jon H. LaBadie  [EMAIL PROTECTED]
 JG Computing
 4455 Province Line Road(609) 252-0159
 Princeton, NJ  08540-4322  (609) 683-7220 (fax)


Re: Writing to tape performance issue?

2007-01-05 Thread Greg Troxel
Look at the "Avg Tp Write Rate".  I see 9.4 MB/s for full dumps, and
2.3 for incrementals.  All of these should be from the holding disk,
but if any are direct to tape it will kill your tape speed.  Look at
the detailed logs to see if you had a really large single filesystem
incremental that is bigger than your holding disk, or something like
that.

-- 
Greg Troxel <[EMAIL PROTECTED]>


Writing to tape performance issue?

2007-01-05 Thread Mark Hennessy
Many thanks for the answers to my previous questions.  I'm just trying to
iron out a few minor issues with my backup cycle at this point.

It would appear, based on the "Tape Time" value provided in the report,
that the rate at which data are written out to the tape is 3.7 MB/s.  The
drive is rated by the manufacturer at 11 MB/s or 22 MB/s, so it would seem
that 3.7 MB/s is a bit slow, yes?  Is "Tape Time" solely the time that Amanda
is writing dumps directly to tape?  Does that include other interactions that
would add a substantial amount of time as well?

I'm using Amanda 2.5.1p2 on FreeBSD 6.1 and a Quantum SDLT220 drive.

STATISTICS:
                          Total       Full      Incr.
                        --------   --------   --------
Estimate Time (hrs:min)    2:03
Run Time (hrs:min)         8:24
Dump Time (hrs:min)        8:55       3:03       5:52
Output Size (meg)       72003.2    36673.6    35329.6
Original Size (meg)     72003.1    36673.6    35329.5
Avg Compressed Size (%)     --         --         --    (level:#disks ...)
Filesystems Dumped           60         11         49   (1:49)
Avg Dump Rate (k/s)      2298.6     3425.6     1713.4

Tape Time (hrs:min)        5:24       1:07       4:17
Tape Size (meg)         72005.8    36674.1    35331.8
Tape Used (%)              71.0       36.2       34.9   (level:#disks ...)
Filesystems Taped            60         11         49   (1:49)

Chunks Taped                  0          0          0
Avg Tp Write Rate (k/s)  3793.3     9408.1     2342.3

USAGE BY TAPE:
  Label     Time     Size     %   Nb  Nc
  VOL13     5:24   72006M  71.0   60   0

--
 Mark Hennessy



Sendsize Timeout Errors

2007-01-05 Thread Sean Connors

Hi all,

My first post here.  I have perused all the resources I know of to answer
this question, but with very limited results.  Forgive me if this
question has been posed and answered before, but I cannot find the answer.


My issue is a sendsize timeout error. When it happens, amstatus shows 
the final filesystem as "getting estimate", and it'll hang there for 
days. The only error comes out of the amandad.xxx log (in /tmp/amanda), 
and it is:


>
amandad: time 21599.605: /usr/local/libexec/sendsize timed out waiting 
for REP data

amandad: time 21599.605: sending NAK pkt:
<
ERROR timeout on reply pipe
>
amandad: time 21605.615: pid 17898 finish time Thu Jan 4 20:40:06 2007

That's it. All other logs basically say OK to everything. Does anyone 
know anything about this? Is this something that has been seen before?


My environment is:
Two servers, one is the Amanda server, and one is the Amanda client. 
Workstations are not backed up.
Attached to the Amanda server is an Apple Xserve RAID (RAID 5); the
largest partitions are 250 GB each.

The greatest amount of data on a single partition is 100GB
Note that the failure only began when I installed the Apple Xserve
RAID and began using very large partitions.  I am leaning toward an
issue with gtar trying to calculate the backup size of 100+ GB of data.
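One more detail worth noting: the timeout fires at 21599.605 seconds, within half a second of exactly six hours, so it looks like a timeout limit being reached rather than a hang or crash.  Alongside the calcsize test, I may also try raising the server-side timeouts.  A sketch of the relevant amanda.conf settings (values are illustrative; check the man page for your version):

```
# amanda.conf (server side) -- illustrative values
etimeout 1800    # seconds allowed per DLE for estimates
                 # (negative means total for all DLEs on a host)
dtimeout 1800    # seconds of dumper inactivity before aborting
```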


Amanda Server:
SunFire V240
Solaris 8 (patched to 02/06)
Amanda 2.5.0 (presently - error exists up to 2.5.1p2)
gtar (dumper in use) is version 1.13.1

Dumptype uses:
GNUTAR
compress server fast (client fast for client dumptype)
holdingdisk yes

Presently, I am testing this config because of what I think may be a 
gtar issue. As yet I have no data:

Dumptype:

compress none
estimate calcsize

Anything anyone can contribute to this issue will be greatly appreciated.

Sean

Sean Connors
Systems Administrator
ArgonST



Stand-by amanda server

2007-01-05 Thread Jon LaBadie
Has anyone ever set up a second system, probably one of
the regular amanda clients, as a stand-by amanda server?

With the use of virtual tapes and external storage,
either from USB drives or a file server, it would
seem to be a fairly straightforward task.

Here is the type of situation I'm envisioning.
My server is on Fedora, using USB drives dedicated
as my 64 vtapes.  I have a client running SuSE.
I may want to upgrade the Fedora system and shake
things out for a week or so but not stop backups
during that time.

So, set up similar file system structures on the
two boxes for the amanda config and the vtape
mount points.  Run an rsync command (or several) to update
the shadow directories on the client.  Then
move the disks from the server to the client
and activate some crontab entries.

Doable?  Any gotchas?
Anyone care to share their experience doing it?

-- 
Jon H. LaBadie  [EMAIL PROTECTED]
 JG Computing
 4455 Province Line Road(609) 252-0159
 Princeton, NJ  08540-4322  (609) 683-7220 (fax)


A couple of questions about dump statistics for Amanda 2.5.1p2

2007-01-05 Thread Mark Hennessy
1. Where can I find detailed documentation on what each of the values
provided in the statistics means and if/how they should add up?

2. When Amanda does its estimates for individual file systems being backed up
with dump, how does it do it?
Does it follow the same procedure for L0 dumps as for incremental dumps?  Is
there any way to save time when a full L0 dump is being done as
opposed to an incremental dump?

STATISTICS:
                          Total       Full      Incr.
                        --------   --------   --------
Estimate Time (hrs:min)    2:01
Run Time (hrs:min)        10:22
Dump Time (hrs:min)        4:44       2:49       1:55
Output Size (meg)       58030.4    38691.5    19338.9
Original Size (meg)     48637.9    38691.5     9946.4
Avg Compressed Size (%)     --         --         --    (level:#disks ...)
Filesystems Dumped           59         12         47   (1:46 2:1)
Avg Dump Rate (k/s)      3483.8     3903.2     2867.4

Tape Time (hrs:min)        1:49       1:19       0:30
Tape Size (meg)         53214.7    38692.0    14522.7
Tape Used (%)              52.5       38.1       14.3   (level:#disks ...)
Filesystems Taped            60         12         48   (1:47 2:1)

Chunks Taped                  0          0          0
Avg Tp Write Rate (k/s)  8367.6     8392.1     8302.9
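On my own question 1, one relationship I could verify from the report: Tape Time is just Tape Size divided by the average write rate.  A quick check (values copied from the statistics above; this is my arithmetic, not a documented Amanda formula):

```shell
# Check: Tape Time ~= Tape Size / Avg Tp Write Rate, per column.
awk 'BEGIN {
    split("53214.7 38692.0 14522.7", size)   # Tape Size (meg)
    split("8367.6 8392.1 8302.9",   rate)    # Avg Tp Write Rate (k/s)
    split("Total Full Incr.",       label)
    for (i = 1; i <= 3; i++) {
        # MB -> KB, divide by KB/s to get seconds, round to minutes
        mins = int(size[i] * 1024 / rate[i] / 60 + 0.5)
        printf "%-6s %d:%02d\n", label[i], mins / 60, mins % 60
    }
}'
```

This prints 1:49, 1:19, and 0:30, matching the Tape Time line above.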

--
 Mark Hennessy



Re: Is there a way to force a amdump - Monthly Backup at end of day from the command line.

2007-01-05 Thread Joshua Baker-LePain

On Fri, 5 Jan 2007 at 12:20pm, Chuck Amadi Systems Administrator wrote


I had a scare yesterday with our RAID 5: a RAID disk died.
Hence I have ordered a hot spare and an additional RAID disk, but I
need to do a full Amanda backup prior to tar'ing up my file system.

I currently run a daily incremental backup plus a full Amanda backup
at the end of the day on Fridays.

Is there a way to force an amdump monthly backup at end of day from
the command line?


I don't fully understand your setup, but there are a couple of ways to force
a full backup:


1) In your normal 'daily' setup, run 'amadmin $CONFIG force' for each DLE.

2) Copy your daily config's amanda.conf and disklist into a new config
   (e.g. Archive) and change your dumpcycle to 0 in the new config.  Then
   run that config.
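For option 1, the commands might look something like this (hypothetical config and host names; see `man amadmin` for your version's host/disk matching rules):

```
amadmin Daily force server.example.com        # mark every DLE on this host
amadmin Daily force client.example.com /home  # or a single DLE
amdump Daily                                  # then start the run by hand
```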

--
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University


Is there a way to force a amdump - Monthly Backup at end of day from the command line.

2007-01-05 Thread Chuck Amadi Systems Administrator
Hi List

I had a scare yesterday with our RAID 5: a RAID disk died.
Hence I have ordered a hot spare and an additional RAID disk, but I
need to do a full Amanda backup prior to tar'ing up my file system.

I currently run a daily incremental backup plus a full Amanda backup
at the end of the day on Fridays.

Is there a way to force an amdump monthly backup at end of day from
the command line?

Cheers


-- 
Unix/ Linux Systems Administrator
Chuck Amadi
The Surgical Material Testing Laboratory (SMTL), 
Princess of Wales Hospital 
Coity Road 
Bridgend, 
United Kingdom, CF31 1RQ.
Email chuck.smtl.co.uk
Tel: +44 1656 752820 
Fax: +44 1656 752830