Without a holding disk, the dumps run sequentially.
With a holding disk, the NAS bandwidth is probably saturated (3x more transfer between the Amanda server and the NAS).

You could upgrade to amanda-3.3.3, which allows writing to multiple vtapes in parallel without using the holding disk.
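As a sketch only (directive name taken from my reading of the 3.3 amanda.conf(5) man page; verify against your installed version), parallel vtape writing is enabled in amanda.conf along the lines of:

```
# amanda.conf (sketch, amanda >= 3.3):
# allow several taper threads to write to different vtapes at once,
# so dumps can stream straight to tape without the holding disk.
taper-parallel-write 2
```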

I see in amdump.1: driver-idle: no-bandwidth
which means that Amanda hit the netusage limit. Your netusage is set to 8000, and you said you have a 1G card in the server, so you could increase netusage.
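For illustration (a sketch, not your actual config; Amanda counts netusage in KBytes/s, and a 1 Gbit/s NIC is roughly 125000 KBytes/s), raising the cap in amanda.conf would look like:

```
# amanda.conf (sketch): raise the global bandwidth cap so the driver
# stops throttling dumpers. Leave headroom below the NIC's line rate.
netusage 100000 Kbps
```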

Anyway, you have one big DLE that takes most of the time:

SUCCESS dumper borneo /disk1 20130315175855 0 [sec 31897.073 kb 483653759 kps 15163.0 orig-kb 662646150]
SUCCESS chunker borneo /disk1 20130315175855 0 [sec 31902.234 kb 483653759 kps 15160.5]
STATS driver estimate borneo /disk1 20130315175855 0 [sec 33019 nkb 662646182 ckb 431408768 kps 13065]
PART taper DailySet3 36 borneo /disk1 20130315175855 1/1 0 [sec 9336.044292 kb 483653758 kps 51804.998260]
DONE taper borneo /disk1 20130315175855 1 0 [sec 9336.044292 kb 483653758 kps 51804.998260]

The write to the vtape (kps ~51800) was much faster than the dump to the holding disk (kps ~15160), so the problem is not the NAS bandwidth.
The problem for this DLE is one or more of:
  - the source disk is not fast enough to read the data any faster
  - CPU: you are doing compression, which requires a lot of CPU. Did you monitor the CPU utilization of the gzip process?
  - network speed
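To check the second point, a quick way (my suggestion, not from the thread; `ps -C` is procps syntax) is to snapshot the gzip processes while a dump is running. A gzip pinned near 100% of one core means compression, not the network or the NAS, is the bottleneck:

```shell
# Snapshot CPU share of any running gzip processes during a dump.
# Empty output simply means no gzip is running at this moment.
snapshot="$(ps -C gzip -o pid=,pcpu=,args= 2>/dev/null || true)"
if [ -n "$snapshot" ]; then
    printf '%s\n' "$snapshot"
else
    echo "no gzip process currently running"
fi
```

Running it a few times during the /disk1 dump gives a rough profile; `top` in batch mode works just as well.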

You could split that big DLE into multiple smaller DLEs.
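A sketch of what that looks like in the disklist (the subdirectory names data1/data2 are hypothetical; substitute the real top-level directories of /disk1, and use whatever dumptype you use today in place of user-tar):

```
# disklist (sketch): replace the single borneo:/disk1 entry with sub-DLEs
# so several dumpers can work on /disk1 in parallel.
borneo  /disk1/data1  user-tar
borneo  /disk1/data2  user-tar
# catch-all for anything not covered by the entries above
borneo  /disk1-rest  /disk1  {
    user-tar
    exclude "./data1"
    exclude append "./data2"
}
```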

Jean-Louis

On 03/18/2013 12:52 AM, Amit Karpe wrote:
Jean-Louis,
Yes, I realized that I should have shared the logfile & amstatus output,
so I am sharing the mail report & log files.

http://pastebin.com/03BwXCsi
http://pastebin.com/EKcnhW88
https://www.dropbox.com/sh/qhh16izq5z43iqj/eQm3FVe_kH/amanda_backup_logs


Using /etc/amanda/DailySet2/logs/amdump.1
From Fri Mar 15 17:58:55 SGT 2013

bali:/boot        1         0g finished (18:02:45)
bali:/etc         1         0g finished (18:03:02)
bali:/home        1         0g finished (18:03:27)
bali:/root        1         0g finished (18:03:33)
bali:/var         0         4g finished (18:38:52)
bengkulu:/boot    1         0g finished (18:03:01)
bengkulu:/etc     1         0g finished (18:03:15)
bengkulu:/home    1         0g finished (18:03:34)
bengkulu:/root    1         0g finished (18:02:45)
bengkulu:/var     0         8g finished (18:52:02)
borneo:/boot      1         0g finished (18:03:01)
borneo:/disk1     0       461g finished (6:49:55)
borneo:/disk2     1         1g finished (18:28:18)
borneo:/disk3     1         2g finished (18:08:15)
borneo:/disk4     0        17g finished (19:28:45)
borneo:/disk5     1         0g finished (18:04:10)
borneo:/disk6     1         0g finished (18:08:26)
borneo:/etc       1         0g finished (18:03:50)
borneo:/home      1         0g finished (18:03:48)
borneo:/root      1         0g finished (18:04:31)
borneo:/var       0         0g finished (18:11:52)
eos:/boot         1         0g finished (18:02:45)
eos:/etc          1         0g finished (18:03:01)
eos:/home/wovodat 1         0g finished (18:03:54)
eos:/root         1         0g finished (18:03:14)
eos:/var          0         0g finished (18:26:01)
mukomuko:/boot    1         0g finished (18:26:01)
mukomuko:/etc     1         0g finished (18:02:45)
mukomuko:/home    0         0g finished (18:29:11)
mukomuko:/root    1         0g finished (18:03:02)
mukomuko:/var     1         0g finished (18:03:32)
phuket:/boot      1         0g finished (18:26:01)
phuket:/etc       1         0g finished (18:03:22)
phuket:/home      0         0g finished (18:30:19)
phuket:/root      1         0g finished (18:02:45)
phuket:/var       1         0g finished (18:03:07)

SUMMARY          part      real  estimated
                           size       size
partition       :  36
estimated       :  36                  448g
flush           :   0         0g
failed          :   0                    0g           (  0.00%)
wait for dumping:   0                    0g           (  0.00%)
dumping to tape :   0                    0g           (  0.00%)
dumping         :   0         0g         0g (  0.00%) (  0.00%)
dumped          :  36       498g       448g (111.06%) (111.06%)
wait for writing:   0         0g         0g (  0.00%) (  0.00%)
wait to flush   :   0         0g         0g (100.00%) (  0.00%)
writing to tape :   0         0g         0g (  0.00%) (  0.00%)
failed to tape  :   0         0g         0g (  0.00%) (  0.00%)
taped           :  36       498g       448g (111.04%) (111.06%)
30 dumpers idle : runq
taper idle
chunker0 busy   :  9:53:21  ( 76.95%)
chunker1 busy   :  0:04:04  (  0.53%)
chunker2 busy   :  0:00:58  (  0.13%)
chunker3 busy   :  0:04:39  (  0.60%)
chunker4 busy   :  0:22:20  (  2.90%)
chunker5 busy   :  0:00:18  (  0.04%)
 dumper0 busy   :  9:53:18  ( 76.94%)
 dumper1 busy   :  0:04:04  (  0.53%)
 dumper2 busy   :  0:00:58  (  0.13%)
 dumper3 busy   :  0:04:39  (  0.60%)
 dumper4 busy   :  0:22:20  (  2.90%)
 dumper5 busy   :  0:00:18  (  0.04%)
   taper busy   :  2:48:32  ( 21.86%)
0 dumpers busy : 2:51:24  ( 22.23%)  runq: 2:51:24  (100.00%)
1 dumper busy  : 9:51:17  ( 76.68%)  runq: 9:51:17  (100.00%)
2 dumpers busy : 0:03:56  (  0.51%)  runq: 0:03:56  (100.00%)
3 dumpers busy : 0:04:00  (  0.52%)  runq: 0:04:00  (100.00%)
4 dumpers busy : 0:00:25  (  0.06%)  runq: 0:00:25  (100.00%)
5 dumpers busy : 0:00:02  (  0.01%)  runq: 0:00:02  (100.00%)
6 dumpers busy : 0:00:01  (  0.00%)  runq: 0:00:01  (100.00%)

On Fri, Mar 15, 2013 at 8:15 PM, Jean-Louis Martineau <martin...@zmanda.com> wrote:

    On 03/15/2013 12:11 AM, Amit Karpe wrote:


        I was not able to observe parallel processing. I can see only
        one dump at a time:
        -bash-4.0$ amstatus DailySet2  | grep dumping
        bengkulu:/var  0         8g dumping        6g ( 73.75%) (11:52:57)
        wait for dumping:   0                    0g           (  0.00%)
        dumping to tape :   0                    0g           (  0.00%)
        dumping         :   1         6g         8g ( 73.75%) ( 18.47%)
        -bash-4.0$


    amstatus gives much more information; can you post the complete
    output or, better, the amdump.X file?
    Can you also post the email report or the log.<datastamp>.0 file?

    You posted a lot of numbers about your hardware and said you
    monitor it, but you never said how close you are to the hardware
    limits.
    You posted no numbers about Amanda's performance (except total
    time and size), nor which numbers you think could be improved.

    Jean-Louis




--
Regards
Amit Karpe.
http://www.amitkarpe.com/
http://news.karpe.net.in/
