Re: amanda not dumping in parallel?

2007-10-03 Thread Jean-Louis Martineau

What's the maxdumps setting?

Jean-Louis

Paul Lussier wrote:

Hi all,

I recently changed my disklist such that all DLEs which pertain to my
NFS appliance have a -1 spindle entry.  My understanding of the man
page was that a -1 spindle setting for a set of DLEs on the same host
means they would be backed up in parallel.

I've checked with amadmin that the configuration change for these DLEs
reflect a -1 spindle, and I have 'inparallel' set to 16.  Yet when I
look at the process table for the host in question, amstatus tells me
that all the DLEs are 'getting estimate', yet there's only a single tar
process running for the estimate phase.

Did I do something wrong, or are estimates run sequentially, with the
actual dumps performed in parallel only once the estimates are in?

Thanks.





Re: amanda not dumping in parallel?

2007-10-03 Thread Chris Hoogendyk



Jean-Louis Martineau wrote:

What's the maxdumps setting?

Jean-Louis

Paul Lussier wrote:

Hi all,

I recently changed my disklist such that all DLEs which pertain to my
NFS appliance have a -1 spindle entry.  My understanding of the man
page was that a -1 spindle setting for a set of DLEs on the same host
means they would be backed up in parallel.

I've checked with amadmin that the configuration change for these DLEs
reflect a -1 spindle, and I have 'inparallel' set to 16.  Yet when I
look at the process table for the host in question, amstatus tells me
that all the DLEs are 'getting estimate', yet there's only a single tar
process running for the estimate phase.

Did I do something wrong, or are estimates run sequentially, with the
actual dumps performed in parallel only once the estimates are in?



Could you post your config file?

There are a couple of things that could cause this. One example would be 
if you don't have a holding disk. If you are going direct to tape, then 
it won't dump in parallel. If that is your configuration, it could also 
contribute to your speed issues in other ways, for example causing 
"shoe-shining" on your LTOs, which would slow things down more. I don't 
think I've seen an answer on that question yet.


There could also be issues with what partitions or spindles things are 
mounted on, and contention from that perspective (referring to the speed 
issue).



---

Chris Hoogendyk

-
  O__   Systems Administrator
 c/ /'_ --- Biology & Geology Departments
(*) \(*) -- 140 Morrill Science Center
~~ - University of Massachusetts, Amherst 


<[EMAIL PROTECTED]>

--- 


Erdös 4




Re: amanda not dumping in parallel?

2007-10-03 Thread Paul Lussier
Jean-Louis Martineau <[EMAIL PROTECTED]> writes:

> What's the maxdumps setting?

DOH!  I don't actually have that one set, so it's defaulting to 1 :(
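For the archives, the fix is a one-line addition to amanda.conf; the value
16 below is only an example (per-host parallelism is still bounded by the
global inparallel setting and by spindle numbers in the disklist):

```
maxdumps 16     # max backups Amanda will run in parallel from one host
                # (the default is 1, which serializes everything per host)
```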
-- 
Thanks,
Paul


Re: amanda not dumping in parallel?

2007-10-03 Thread Paul Lussier
Chris Hoogendyk <[EMAIL PROTECTED]> writes:

> Could you post your config file?

Sure, no problem.  See below.

> There are a couple of things that could cause this. One example would
> be if you don't have a holding disk.

Nope, I've got a 2TB holding disk.

> If you are going direct to tape, then it won't dump in parallel

Right, actual dumps seem to happen in parallel, just not the
estimates.  Which I think might be the maxdumps setting Jean-Louis
pointed out.  I for some reason had overlooked that setting and it was
using the default.

> If that is your configuration, it could also contribute to your
> speed issues in other ways, for example causing "shoe-shining" on
> your LTOs, which would slow things down more. I don't think I've
> seen an answer on that question yet.

I think there's more going on with that than just Amanda performance.
This server is completely misconfigured, and I suspect our network and
the NAS we're trying to back up suffer from the same misconfiguration.
All three were put together by the same person, who has since left.
I'm inheriting multiple messes which impact each other significantly,
and it's impossible to tell the root cause of each of the various
problems.

> There could also be issues with what partitions or spindles things are
> mounted on, and contention from that perspective (referring to the
> speed issue).

Everything there is on a NAS, so technically everything is striped
across N drives in a RAID5 array.  I was thinking that you'd have
everything virtually on the same "spindle" in this case, but then it
was pointed out that we have 300 other systems NFS mounting from this
NAS.  So, if one host can't read from all file systems on the NAS
simultaneously, it's more likely a problem with that host than it is
with the NAS.
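(For anyone searching the archives: the disklist entries under discussion
look roughly like the sketch below. The hostname, paths, and dumptype are
invented for illustration; the optional fourth field is the spindle, and
-1 means "no spindle restriction", so Amanda is free to dump these DLEs
from the same host concurrently, up to maxdumps of them.)

```
# disklist: hostname  diskname  dumptype  [spindle]
nas.example.com  /vol/home      comp-user-tar  -1
nas.example.com  /vol/projects  comp-user-tar  -1
nas.example.com  /vol/scratch   comp-user-tar  -1
```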

I'm getting very frustrated because I just want to rip it all apart
and do it right, but we "don't have the time for that".  Grrr.

-- 
Thanks,
Paul

Here's my config:

org ""   # Subject line prefix for reports.
mailto "[EMAIL PROTECTED]" # space separated list of recipients.

dumpuser "backup"   # user to run dumps under

maxdumps   16   # The maximum number of backups from a single host
                # that Amanda will attempt to run in parallel.

inparallel 32   # maximum dumpers that will run in parallel (max 63)
# within the constraints of network bandwidth
# and holding disk space available

displayunit "g" # Possible values: "k|m|g|t"
# Default: k. 
# The unit used to print many numbers.
# k=kilo, m=mega, g=giga, t=tera

netusage  1024 mbps # maximum net bandwidth for Amanda
                # (if no unit is given, the default unit is KB per sec)


dumpcycle7  # the number of days in the normal dump cycle
runtapes 4  # number of tapes to be used in a single run of amdump

tapecycle   10 tapes# the number of tapes in rotation
# dumpcycle * runtapes * 6

bumpsize20 Gb   # minimum savings (threshold) to bump level 1 -> 2
bumppercent  0  # minimum savings as a percentage to bump level 1 -> 2
                # (0 disables it; bumpsize is used instead)
bumpdays 1  # minimum days at each level
bumpmult 1.5# threshold = bumpsize * bumpmult^(level-1)

etimeout  10800  # number of seconds per filesystem for estimates.
dtimeout  7200  # number of idle seconds before a dump is aborted.
ctimeout30  # maximum number of seconds that amcheck waits
# for each client host
usetimestamps true 
labelstr "^S[0-9][0-9]-T[0-9][0-9]$"

tapebufs 40 # A positive integer telling taper how many
# 32k buffers to allocate.  WARNING! If this
# is set too high, taper will not be able to
# allocate the memory and will die.  The
# default is 20 (640k).

tpchanger "chg-zd-mtx"  # the tape-changer glue script
tapedev "/dev/nst1" # the no-rewind tape device to be used
changerfile "/etc/amanda/offsite/overland-mtx"
changerdev "/dev/sg1"

maxdumpsize -1   # Maximum number of bytes the planner will
 # schedule for a run 
 # (default: runtapes * tape_length).

amrecover_do_fsf yes # amrecover will call amrestore with the
 # -f flag for faster positioning of the tape.
amrecover_check_label yes# amrecover will call amrestore with the
 # -l flag to check the label.
amrecover_changer "changer"  # amrecover will use the changer if you restore
 # from this device: amrecover -d changer

holdingdisk hd1 {
comment "main holding disk"
directory "/backups/amanda/offsite" # where the holding disk is
use 1700 Gb # how much space can we use on it
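(The bump parameters in the config above combine as the comments note:
the threshold for bumping from level n is bumpsize * bumpmult^(n-1).
A quick sketch of that arithmetic, with values taken from this config:)

```python
# Bump-threshold arithmetic for the config above:
# threshold(level) = bumpsize * bumpmult ** (level - 1), in GB.
bumpsize = 20    # GB, minimum savings to bump level 1 -> 2
bumpmult = 1.5   # growth factor per level

def bump_threshold(level):
    """Savings (in GB) Amanda wants before bumping from `level` to `level + 1`."""
    return bumpsize * bumpmult ** (level - 1)

for level in (1, 2, 3):
    print(level, bump_threshold(level))
# prints:
# 1 20.0
# 2 30.0
# 3 45.0
```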