RE: TCP Tuning

2009-11-12 Thread Alan Griffiths

I have tested with a larger sample (40G) and the results were the same.



There is a single route between the hosts so all traffic traverses it.
There appears to be no resource issue on the backup client;
iostat/vmstat report low utilisation.



netstat reports no errors - but this would impact NFS as well.
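Since the thread is about TCP tuning: on Linux, the kernel's TCP buffer sizes can be inspected and raised via sysctl. The values below are generic illustrative examples, not tested recommendations for this particular setup:

```
# /etc/sysctl.conf - example TCP buffer tuning (illustrative values only)
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# min / default / max, in bytes
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
```

Apply with `sysctl -p`, or inspect the current values with `sysctl net.ipv4.tcp_rmem`.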



Thanks,



Alan


 Date: Thu, 29 Oct 2009 16:09:16 -0400
 From: martin...@zmanda.com
 To: ap_griffi...@hotmail.com
 CC: dus...@zmanda.com; amanda-users@amanda.org
 Subject: Re: TCP Tuning

 Alan,

 Most people report faster throughput without NFS; your result is surprising.

 Your sample is small (1.5GB) - is it possible it was cached by the NFS
 client?
 Do Amanda and NFS use the same network route?
 Does the NFS server have enough memory and CPU to run the Amanda client
 software?
 Check the network cards - do they report errors?

 Jean-Louis

 Alan Griffiths wrote:
 This time with files *actually* attached!

 

 From: ap_griffi...@hotmail.com
 To: martin...@zmanda.com
 CC: dus...@zmanda.com; amanda-users@amanda.org
 Subject: RE: TCP Tuning
 Date: Thu, 22 Oct 2009 17:14:33 +0100


 Just one dle.
 No compression - data is already compressed (gzip).
 No encryption.
 I am using holding disk.

 Attached files: -

 amdump.1 direct from client
 amdump.3 through NFS.

  

Inparallel - not paralleling

2009-11-12 Thread Deb Baddorf

I have a 6 yr old amanda config which has been very nicely using up to
10 parallel dumpers.   I've got 32 client nodes,  so I have  MAXDUMPS
set to 2.  The parallelism I desire is across different clients,   not so much
on the same client.

Taking the same configuration (edited)   to a new machine and new tape changer
robot,   I've still got INPARALLEL  set to 10 ...   but no parallelism is
occurring  (in test runs of only 7 client nodes).   What am I missing --
why is my new setup not using multiple dumpers?   Seven clients ought
to be enough to cause parallelism.
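As a rough sanity check on what the scheduler could do (this is only a sketch of the ceiling, not Amanda's actual scheduling logic, which also weighs netusage, holding-disk space, and dump order):

```python
# Rough upper bound on concurrent dumpers: the smaller of INPARALLEL
# and (number of clients * MAXDUMPS). Not Amanda's real scheduler.
def max_dumpers(inparallel, clients, maxdumps):
    return min(inparallel, clients * maxdumps)

# Old setup: 32 clients, MAXDUMPS 2, INPARALLEL 10 -> ceiling of 10
print(max_dumpers(10, 32, 2))  # 10
# New test runs: 7 clients, MAXDUMPS 2 -> still a ceiling of 10
print(max_dumpers(10, 7, 2))   # 10
```

So with 7 clients the configuration itself still permits up to 10 parallel dumpers; the bottleneck must be elsewhere (holding disk, bandwidth, or per-run estimates).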

Deb Baddorf
Fermilab


Oh -- the new server is Linux rather than FreeBSD,  so that's  another
difference.   But:
ulimit -a
core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 32767
max locked memory   (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files  (-n) 1024
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 10240
cpu time   (seconds, -t) unlimited
max user processes  (-u) 32767
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited

-
my configs:
--
#
# configMAIN.include - Amanda configuration global definitions
#
#
dumpuser operator
inparallel 10
dumporder BTBTBTBTBT
taperalgo largestfit

flush-threshold-dumped 70
flush-threshold-scheduled 100
taperflush 0
netusage  1800 Kbps
bumpsize 2 Mb
bumpdays 1
bumppercent 20
bumpmult 4

maxdumpsize -1
amrecover_do_fsf yes
amrecover_check_label yes
amrecover_changer changer

etimeout 2000
dtimeout 1800
ctimeout 30

tapebufs 20
tpchanger chg-zd-mtx  # the tape-changer glue script

reserve 02 # percent
autoflush yes
##
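For reference, the bump parameters above interact: as commonly documented for Amanda, the savings required to bump from incremental level L to L+1 starts at bumpsize and grows by bumpmult each level (with bumppercent > 0, a percentage test is used instead). A quick sketch of that progression, assuming these semantics:

```python
# Bump-threshold progression for bumpsize 2 Mb, bumpmult 4
# (sketch of the documented rule; bumppercent, if set, overrides bumpsize).
def bump_threshold_mb(bumpsize_mb, bumpmult, level):
    return bumpsize_mb * (bumpmult ** (level - 1))

for level in range(1, 5):
    print(level, bump_threshold_mb(2, 4, level))  # 2, 8, 32, 128 MB
```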


define tapetype SDLT320 {
 comment HP Super DLTtape I, data cartridge, C7980A, compression on
 length 139776 mbytes
 filemark 0 kbytes
 speed 13980 kps
}
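A back-of-envelope check of this tapetype: at the rated speed, filling the tape takes length/speed seconds (assuming Amanda's usual units of mbytes = MiB and kps = KiB/s):

```python
# Time to fill an SDLT320 tape at its rated speed (rough estimate).
length_mb = 139776   # tapetype length, MiB
speed_kps = 13980    # tapetype speed, KiB/s
seconds = length_mb * 1024 / speed_kps
print(round(seconds / 3600, 1))  # ~2.8 hours
```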


#old 30-tape stacker.   Keep in case needed to read older tapes.
define tapetype DLT4000-IV {
comment Quantum DLT4000 or DLT7000 writing DLT4000 format, with DLTtape IV uncompressed
length 4 mbytes
filemark 8 kbytes
speed 1500 kps
lbl-templ /usr/local/etc/amanda/3hole.ps
}


#==#
define dumptype BDglobal {
comment Global definitions
index yes
priority medium
compress client fast
}
define dumptype BDnormal {
BDglobal
record yes
}

--
---
#  DAILY configuration
#=#

includefile /usr/local/etc/amanda/configMAIN.include

#=#


dumpcycle 7 days
runspercycle 5
runtapes 5
tapecycle 100 tapes

# 42-stacker unit:   # 11/01/09
changerdev /dev/changer
tapedev tape:/dev/nst2  # the no-rewind tape device to be used
#   nst2  = changer 0   top unit,  where 2/3 of daily tapes are

changerfile /usr/local/etc/amanda/chg-daily-42   #my config data

tapetype SDLT320# what kind of tape
#  (see tapetypes in  ../configMAIN.include)

##
holdingdisk hd1 {
comment main holding disk
directory /spool/amanda/daily   # where the holding disk is
use -100 Mb # how much space can we use on it
# a non-positive value means:
#use all space but that value
# 20Gb was causing a perl glitch: negative values in flush
chunksize 2000Mb# size of chunk if you want big dump to be
# dumped on multiple files on holding disks
#  N Kb/Mb/Gb split images in chunks of size N
# The maximum value should be
# (MAX_FILE_SIZE - 1Mb)
#  0  same as INT_MAX bytes
}
holdingdisk hd2 {
directory /spool2/amanda/daily
use -100 Mb
chunksize 2000Mb# size of chunk if you want big dump to be
# dumped on multiple files on holding disks
#  N Kb/Mb/Gb split images in chunks of size N
# The maximum value should be
# (MAX_FILE_SIZE - 1Mb)
#  0  same as INT_MAX bytes
}
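On the "use -100 Mb" lines above: per the comments, a non-positive value means "use all available space except that amount". A small sketch of that rule, with hypothetical free-space numbers:

```python
# How a non-positive holdingdisk "use" value is interpreted (sketch of
# the documented rule; free_mb is a hypothetical free-space figure).
def usable_mb(free_mb, use_mb):
    if use_mb <= 0:
        return free_mb + use_mb  # e.g. use -100 -> all free space minus 100 MB
    return min(free_mb, use_mb)

print(usable_mb(50000, -100))  # 49900
```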

##
define dumptype dailyNormal {
BDnormal
}

define dumptype dailyNormalFast {
BDnormal
maxdumps 3
}



Re: Inparallel - not paralleling

2009-11-12 Thread Frank Smith

Deb Baddorf wrote:

[full quote of Deb's message trimmed - see the original post above]
The report and debug files on the server may provide more clues,
but for starters I would verify that you actually have adequate
holdingdisk space (check quotas as