At 9:43 PM -0600 11/12/09, Frank Smith wrote:
Deb Baddorf wrote:
I have a six-year-old Amanda config which has been very nicely using up to
10 parallel dumpers.   I've got 32 client nodes, so I have MAXDUMPS
set to 2.  The parallelism I want is across different clients, not so much
on the same client.
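
To spell out the two knobs as I understand them: INPARALLEL caps the
total number of dumpers the server will run across all clients, while
MAXDUMPS caps simultaneous dumps on any one client.  So the old setup
amounted to roughly:

inparallel 10   # up to 10 dumpers total, spread across the clients
maxdumps 2      # but at most 2 simultaneous dumps on any one client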

Taking the same configuration (edited) to a new machine and a new tape
changer robot, I've still got INPARALLEL set to 10 ... but no parallelism
is occurring (in test runs of only 7 client nodes).   What am I missing --
why is my new setup not using multiple dumpers?   Seven clients ought
to be enough to trigger parallelism.

Deb Baddorf
Fermilab


Oh -- the new server is Linux rather than FreeBSD,  so that's  another
difference.   But:
ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 32767
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 32767
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

-----------------
my configs:
------------------
#
# configMAIN.include - Amanda configuration global definitions
#
#
dumpuser "operator"
inparallel 10
dumporder "BTBTBTBTBT"
taperalgo largestfit

flush-threshold-dumped 70
flush-threshold-scheduled 100
taperflush 0
netusage  1800 Kbps
bumpsize 20000 Mb
bumpdays 1
bumppercent 20
bumpmult 4

maxdumpsize -1
amrecover_do_fsf yes
amrecover_check_label yes
amrecover_changer "changer"

etimeout 2000
dtimeout 1800
ctimeout 30

tapebufs 20
tpchanger "chg-zd-mtx"  # the tape-changer glue script

reserve 02 # percent
autoflush yes
#====================================================#


define tapetype SDLT320 {
     comment "HP Super DLTtape I, data cartridge, C7980A, compression on"
     length 139776 mbytes
     filemark 0 kbytes
     speed 13980 kps
}


#old 30-tape stacker.   Keep in case needed to read older tapes.
define tapetype DLT4000-IV {
comment "Quantum DLT4000 or DLT7000 writing DLT4000 format, with DLTtape IV uncompr
essed"
    length 40000 mbytes
    filemark 8 kbytes
    speed 1500 kps
    lbl-templ "/usr/local/etc/amanda/3hole.ps"
}


#==============================================================#
define dumptype BDglobal {
    comment "Global definitions"
    index yes
    priority medium
    compress client fast
}
define dumptype BDnormal {
    BDglobal
    record yes
}

----------------------
-----------------------
#  DAILY configuration
#=============================#

includefile "/usr/local/etc/amanda/configMAIN.include"

#=============================#


dumpcycle 7 days
runspercycle 5
runtapes 5
tapecycle 100 tapes

# 42-stacker unit:   # 11/01/09
changerdev "/dev/changer"
tapedev "tape:/dev/nst2"    # the no-rewind tape device to be used
#   nst2  = changer 0       top unit,  where 2/3 of daily tapes are

changerfile "/usr/local/etc/amanda/chg-daily-42"   #my config data

tapetype SDLT320        # what kind of tape
                #  (see tapetypes in  ../configMAIN.include)

#====================================================#
holdingdisk hd1 {
    comment "main holding disk"
    directory "/spool/amanda/daily"   # where the holding disk is
    use -100 Mb         # how much space can we use on it
                        # a non-positive value means:
                        #        use all space but that value
# 20 Gb here was triggering a Perl glitch: negative values during flush
    chunksize 2000Mb    # size of chunk if you want big dump to be
                        # dumped on multiple files on holding disks
                        #  N Kb/Mb/Gb split images in chunks of size N
                        #             The maximum value should be
                        #             (MAX_FILE_SIZE - 1Mb)
                        #  0          same as INT_MAX bytes
    }
holdingdisk hd2 {
    directory "/spool2/amanda/daily"
    use -100 Mb
    chunksize 2000Mb    # size of chunk if you want big dump to be
                        # dumped on multiple files on holding disks
                        #  N Kb/Mb/Gb split images in chunks of size N
                        #             The maximum value should be
                        #             (MAX_FILE_SIZE - 1Mb)
                        #  0          same as INT_MAX bytes
    }

#====================================================#
define dumptype dailyNormal {
    BDnormal
}

define dumptype dailyNormalFast {
    BDnormal
    maxdumps 3
}

The report and debug files on the server may provide more clues,
but for starters I would verify that you actually have adequate
holding disk space (check quotas as well as just running df).  Too little
space will cause the dumps to be written serially, direct to tape.
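
Something along these lines (using the dumpuser and holding disk paths
from your config; run as root so quota can report on another user) would
show both:

df -k /spool/amanda/daily /spool2/amanda/daily
quota -v operator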

Frank


No disk quotas,  and I've got twice as much holding disk space as
on the older machine.   Anybody know of any compile-time (./configure)
parameters related to this?   We've got 50 or so ports allocated, so even
that shouldn't be the problem.
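
By "ports" I mean the reserved port ranges Amanda was built with --
something set at build time like the following (these exact ranges are
just an example; ours covers 50-odd ports):

./configure --with-tcpportrange=50000,50050 --with-udpportrange=840,860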

MAXDUMPS:   I guess I'm not setting that at all, except in one or
two dumptypes -- which I'm not using these days.   But it's the
cross-node parallelism that I want, and that's what I'm not seeing!

Deb
