Re: Is my File system corrupt ?

2013-03-20 Thread Amit Karpe
Update:
1. I now have full access to the NAS via SSH. Will do fsck only on
Friday-Sunday.
2. I don't see any filesystem or binary corruption from basic checks.
3. I have reduced the inparallel and maxdumps values and am testing now.

On Wed, Mar 20, 2013 at 5:43 AM, Heiko Schlittermann
h...@schlittermann.de wrote:

 Amit Karpe amitka...@gmail.com (Tue 19 Mar 2013 06:14:59 CET):
 
  Just now I have checked the manuals and now I have full access to this NAS
 system.
  By using the QNAP web interface I found that the SMART Information for
  all Physical Disks is Good. Still investigating what the problem can be.

 You can switch on SSH access to the NAS and try to run fsck on the file
 system. - Just to make sure that the file system is not corrupted.

 Best regards from Dresden/Germany
 Viele Grüße aus Dresden
 Heiko Schlittermann
 --
  SCHLITTERMANN.de  internet  unix support -
  Heiko Schlittermann, Dipl.-Ing. (TU) - {fon,fax}: +49.351.802998{1,3} -
  gnupg encrypted messages are welcome --- key ID: 7CBF764A -
  gnupg fingerprint: 9288 F17D BBF9 9625 5ABC  285C 26A9 687E 7CBF 764A -
 (gnupg fingerprint: 3061 CFBF 2D88 F034 E8D2  7E92 EE4E AC98 48D0 359B)-




-- 
Regards
Amit Karpe.
http://www.amitkarpe.com/
http://news.karpe.net.in/


Re: Amanda Performance

2013-03-19 Thread Amit Karpe
On Mon, Mar 18, 2013 at 10:44 PM, Gene Heskett ghesk...@wdtv.com wrote:

 On Monday 18 March 2013 10:06:59 Amit Karpe did opine:

  Reply in-line:
 
  On Mon, Mar 18, 2013 at 1:59 PM, Jon LaBadie j...@jgcomp.com wrote:
   On Mon, Mar 18, 2013 at 12:55:40PM +0800, Amit Karpe wrote:
Brian,
    Yes, initially I was using a holdingdisk for all configs. But when I
    saw the backup was slow and taking more time, I suspected that my backup
    and holdingdisk are on the same disk (NAS), which is 19TB. So I changed
    it and forced it not to be used.
   
    But now I realize it's required, so I will use it for the next run of
    Amanda.

    Could someone let me know the use of chunksize ?
    Should it be big enough, like 10GB or 50GB ?
    As I am using Vtape, finally all these temporary files are going
    to be merged into one file. So why should we create these chunks ?
    Why not let dumper dump directly into the final vtape's slot directory ?
  
   The holding disk chunksize was added to overcome the 2GB max file size
   limitation of some filesystems.  It is also useful if you allocate
   multiple holding disks, some which may not be big enough for your
   large DLEs.
  
  
   The parts of amanda doing the dumps do not peek to see where the data
   will eventually be stored.  Taper does that part and it is not called
   until a DLE is successfully dumped.  If you are going direct to tape
   taper is receiving the dump directly but then you can only do one DLE
   at a time.
  
   Jon
 
  So will using a chunksize option like 10GB / 50GB / 100GB help Amanda
  run dumpers in parallel ?

 There are, generally speaking, several aspects of doing backups in
 parallel.

 1. If you don't want your disk(s) to be thrashed by seeking (which of
 course has a speed penalty), you must restrict the read operations to one
 file at a time per disk spindle; this is in the docs, man disklist I
 believe.
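Gene's one-reader-per-spindle rule is expressed in the disklist via the optional spindle field (a sketch; the hostnames and the dumptype name are made up):

```text
# disklist sketch: the first two DLEs share physical disk 1 on host
# "phuket", so they get the same spindle number and amanda will not
# dump them at the same time; /home is on its own disk.
phuket  /a      comp-user-tar  1
phuket  /b      comp-user-tar  1
phuket  /home   comp-user-tar  2
```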


Yes, I got the idea. Will try in next backup process.


 2. The chunk size is to get around some file system limits that often cause
 things to go all agley when fast integer math in the filesystem falls over
 at 2GB on a single file.  It has nothing, or very little, to do with speed
 other than the overhead of breaking it up during the writes to the holding
 disk area, then splicing it back together as it's sent down the cable to the
 storage media.  IOW it is to keep your OS from trashing the file as it's
 being put in the holding area as a merged file from the directory tree
 specified in the disklist.  IOW your file is likely not the problem
 unless that file is a dvd image, but the merged output of tar or (spit)
 dump can easily be more than 2GB.
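Gene's split-and-splice can be sketched in a few lines (purely illustrative; real chunk files carry Amanda headers, which this ignores):

```python
# Illustration of holding-disk chunking: a dump stream is written as
# fixed-size chunks, then spliced back together in order when taped;
# the content survives the round trip unchanged.
def split_into_chunks(data: bytes, chunk_size: int) -> list[bytes]:
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def splice(chunks: list[bytes]) -> bytes:
    return b"".join(chunks)

dump = bytes(range(256)) * 40           # a fake 10240-byte "dump"
chunks = split_into_chunks(dump, 4096)  # pretend the chunksize is 4096
assert splice(chunks) == dump           # chunking loses nothing
print(len(chunks))                      # 3
```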

 I have one directory in my /home that will almost certainly have to be as
 separate files as I have debian-testing for 3 different architectures here.
 That would be about 30 disklist entries all by itself as there are 30 dvd
 images for the whole thing.

 Yes, now I am using a holding disk with a big chunk size of 100GB.

3. Parallelism would probably be helped, given that the data moving
 bandwidth is sufficient, if more than one holding disk area was allocated,
 with each allocation being on a separate spindle/disk so that the holding
 disk itself would not be subjected to this same seek thrashing time killing
 IF you also had more than one storage drive being written in parallel.  If
 only one tape is mounted at a time in your setup, once you've taken action
 against seek thrashing of the source disk(s), the next thing is improving
 the bandwidth.

 Will try to use 2 holding disks in the next backup run. And will update my
result here.



 This last however, may not be something that amanda has learned how to use
 effectively as AFAIK, there is not an optional 'spindle' number in the
 holding disk entry for that distinction.

 I am not sure.


 Cheers, Gene
 --
 There are four boxes to be used in defense of liberty:
  soap, ballot, jury, and ammo. Please use in that order.
 -Ed Howdershelt (Author)
 My web page: http://coyoteden.dyndns-free.com:85/gene is up!
 My views
 http://www.armchairpatriot.com/What%20Has%20America%20Become.shtml
 Engineering without management is art.
 -- Jeff Johnson
 I was taught to respect my elders, but its getting
 harder and harder to find any...




-- 
Regards
Amit Karpe.
http://www.amitkarpe.com/
http://news.karpe.net.in/


Re: Amanda Performance

2013-03-18 Thread Amit Karpe
Reply in-line:

On Mon, Mar 18, 2013 at 1:59 PM, Jon LaBadie j...@jgcomp.com wrote:

 On Mon, Mar 18, 2013 at 12:55:40PM +0800, Amit Karpe wrote:
  Brian,
  Yes, initially I was using a holdingdisk for all configs. But when I saw
  the backup was slow and taking more time, I suspected that my backup and
  holdingdisk are on the same disk (NAS), which is 19TB. So I changed it and
  forced it not to be used.
 
  But now I realize it's required, so I will use it for the next run of Amanda.

  Could someone let me know the use of chunksize ?
  Should it be big enough, like 10GB or 50GB ?
  As I am using Vtape, finally all these temporary files are going
  to be merged into one file. So why should we create these chunks ? Why
  not let dumper dump directly into the final vtape's slot directory ?
 
 The holding disk chunksize was added to overcome the 2GB max file size
 limitation of some filesystems.  It is also useful if you allocate
 multiple holding disks, some which may not be big enough for your
 large DLEs.


 The parts of amanda doing the dumps do not peek to see where the data
 will eventually be stored.  Taper does that part and it is not called
 until a DLE is successfully dumped.  If you are going direct to tape
 taper is receiving the dump directly but then you can only do one DLE
 at a time.

 Jon
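For reference, a holdingdisk block with chunksize, as Jon describes it, might look like this in amanda.conf (a sketch; the path and sizes are made up):

```text
# amanda.conf sketch: a holding disk on a local spindle, with dumps
# split into 10GB chunk files while they sit on the holding disk.
holdingdisk hd1 {
    directory "/holding1"   # local disk, not on the NAS
    use 400 Gb              # space amanda may use here
    chunksize 10 Gb         # maximum size of each chunk file
}
```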


So will using a chunksize option like 10GB / 50GB / 100GB help Amanda run
dumpers in parallel ?


-- 
Regards
Amit Karpe.
http://www.amitkarpe.com/
http://news.karpe.net.in/


Re: Amanda Performance

2013-03-18 Thread Amit Karpe
On Mon, Mar 18, 2013 at 3:04 PM, C.Scheeder christ...@scheeder.de wrote:

 Hi Again,

 On 18.03.2013 05:55, Amit Karpe wrote:

  Brian,
 Yes, initially I was using a holdingdisk for all configs. But when I saw
 the backup was slow and taking more time, I suspected that my backup and
 holdingdisk are on the same disk (NAS), which is 19TB. So I changed it and
 forced it not to be used.


 That holdingdisk on the NAS won't buy you much speed improvement.
 With the holdingdisk on the NAS, all your data will be sent 4 times over
 the network:
 1. from client to amanda server,
 2. from amanda server to NAS holdingdisk,
 3. from NAS holdingdisk back to amanda server,
 4. from amanda server back to your vtape storage.

 And voila, there is your network bottleneck again, limiting you to roughly
 max network bandwidth / 4 MByte/sec for the dumps.

 You really need physical disk(s) in your amanda machine for holding space
 to speed things up.
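Christoph's divide-by-four arithmetic works out like this (a sketch; 110 MB/s is an assumed usable payload rate for a 1 Gbit/s link):

```python
# Sketch of the bottleneck Christoph describes: with the holding disk
# on the NAS, every byte crosses the network four times, so the best
# case for the whole run is a quarter of the link's usable bandwidth.
link_mb_per_s = 110   # assumed usable payload of a 1 Gbit/s link, in MB/s
traversals = 4        # client->server, server->NAS holding disk,
                      # NAS holding disk->server, server->vtapes on NAS
effective = link_mb_per_s / traversals
print(effective)      # 27.5 MB/s upper bound for the dumps
```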

 Christoph



Thank you Christoph. I will try to configure and arrange my backup server
with extra disk space.

-- 
Regards
Amit Karpe.
http://www.amitkarpe.com/
http://news.karpe.net.in/


Is my File system corrupt ?

2013-03-18 Thread Amit Karpe
+0x20d/0x413

Message from syslogd@borneo at Mar 18 23:02:11 ...
 kernel: [810e359d] do_sync_write+0xe8/0x125

Message from syslogd@borneo at Mar 18 23:02:11 ...
 kernel: [810116ce] ? common_interrupt+0xe/0x13

Message from syslogd@borneo at Mar 18 23:02:11 ...
 kernel: [8105d6fb] ? autoremove_wake_function+0x0/0x39

Message from syslogd@borneo at Mar 18 23:02:11 ...
 kernel: [811910f5] ? security_file_permission+0x16/0x18

Message from syslogd@borneo at Mar 18 23:02:11 ...
 kernel: [810e3ba0] vfs_write+0xae/0x10b

Message from syslogd@borneo at Mar 18 23:02:11 ...
 kernel: [810e3cbd] sys_write+0x4a/0x6e

Message from syslogd@borneo at Mar 18 23:02:11 ...
 kernel: [81010cc2] system_call_fastpath+0x16/0x1b

Message from syslogd@borneo at Mar 18 23:02:11 ...
 kernel:Code: 89 e5 41 56 41 55 41 54 53 0f 1f 44 00 00 48 8b 47 18 48 89
fb 41 89 f6 48 8b 00 4c 8b 60 f8 4c 8d a8 58 fe ff ff 4d 85 e4 75 04 0f
0b eb fe 48 89 fe 4c 89 e7 e8 0a 5f fb ff 84 c0 74 13 31 c0

You have new mail in /var/spool/mail/root
[root@borneo data]# ls -lh /nasbackup/full/holdingdisk/2*
^C^C^C^C^C^C^C
^C^Cdev@monitor:/media/data/Downloads$ ssh root@borneo
root@borneo's password:
Last login: Tue Mar 19 10:46:39 2013 from daniel.eos.ntu.edu.sg
[root@borneo ~]# ls -lh /nasbackup/full/holdingdisk/
total 8.0K
drwxrwx---+ 2 amandabackup disk 4.0K 2013-03-19 09:05 20130318133816
[root@borneo ~]# ls -lh /nasbackup/full/holdingdisk/20130318133816


-- 
Regards
Amit Karpe.
http://www.amitkarpe.com/
http://news.karpe.net.in/


Re: Is my File system corrupt ?

2013-03-18 Thread Amit Karpe
On Tue, Mar 19, 2013 at 11:51 AM, Amit Karpe amitka...@gmail.com wrote:

 Hi All,
 I am facing a new problem with my NAS filesystem, with a particular
 directory. (Until yesterday I was trying to improve the speed of my backup.)
 The filesystem on the NAS seems to be working, as I can access other
 directories and I can even copy files from the NAS to other systems.

 But I can't access this
 directory /nasbackup/full/holdingdisk/20130318133816
 If I run the ls command it just hangs there, and I can't kill it using the
 kill command.

 My question: Is my filesystem corrupt ? How can I cross-check ?
 I can't use regular fsck, as I don't have access to the NAS command prompt.


Just now I have checked the manuals and now I have full access to this NAS
system. By using the QNAP web interface I found that the SMART Information for
all Physical Disks is Good. Still investigating what the problem can be.



 I am pasting my screen output here:
 (Which shows that it was working fine, then I received a kernel error, then
 no response. Even my dumping process (amdump) got stuck.)
 amdump file: http://pastebin.com/F1UEHJ51

 [root@borneo data]# ls -lh /nasbackup/full/holdingdisk/2*
 total 965G
 -rw---+ 1 amandabackup disk  50G 2013-03-18 15:59 bali._b.0
 -rw---+ 1 amandabackup disk  43G 2013-03-18 17:47 bali._b.0.1
 -rw---+ 1 amandabackup disk  44G 2013-03-18 19:08 borneo._disk1.0.tmp
 -rw---+ 1 amandabackup disk  50G 2013-03-18 17:00 borneo._disk2.0.1.tmp
 -rw---+ 1 amandabackup disk  50G 2013-03-18 18:28 borneo._disk2.0.2.tmp
 -rw---+ 1 amandabackup disk  26G 2013-03-18 19:08 borneo._disk2.0.3.tmp
 -rw---+ 1 amandabackup disk  50G 2013-03-18 15:08 borneo._disk2.0.tmp
 -rw---+ 1 amandabackup disk  50G 2013-03-18 17:34 borneo._disk3.0.1.tmp
 -rw---+ 1 amandabackup disk  50G 2013-03-18 19:08 borneo._disk3.0.2.tmp
 -rw---+ 1 amandabackup disk  50G 2013-03-18 15:55 borneo._disk3.0.tmp
 -rw---+ 1 amandabackup disk  50G 2013-03-18 18:56 borneo._disk5.0.1.tmp
 -rw---+ 1 amandabackup disk 6.3G 2013-03-18 19:08 borneo._disk5.0.2.tmp
 -rw---+ 1 amandabackup disk  50G 2013-03-18 17:13 borneo._disk5.0.tmp
 -rw---+ 1 amandabackup disk  50G 2013-03-18 16:40 borneo._disk6.0
 -rw---+ 1 amandabackup disk  50G 2013-03-18 18:19 borneo._disk6.0.1
 -rw---+ 1 amandabackup disk 1.1G 2013-03-18 18:24 borneo._disk6.0.2
 -rw---+ 1 amandabackup disk  21G 2013-03-18 19:08 eos._home.0.1.tmp
 -rw---+ 1 amandabackup disk  50G 2013-03-18 17:11 eos._home.0.tmp
 -rw---+ 1 amandabackup disk  50G 2013-03-18 16:22 phuket._a.0.1.tmp
 -rw---+ 1 amandabackup disk  50G 2013-03-18 17:33 phuket._a.0.2.tmp
 -rw---+ 1 amandabackup disk  50G 2013-03-18 18:38 phuket._a.0.3.tmp
 -rw---+ 1 amandabackup disk  26G 2013-03-18 19:08 phuket._a.0.4.tmp
 -rw---+ 1 amandabackup disk  50G 2013-03-18 15:03 phuket._a.0.tmp
 [root@borneo data]#
 Message from syslogd@borneo at Mar 18 23:02:11 ...
  kernel:[ cut here ]

 Message from syslogd@borneo at Mar 18 23:02:11 ...
  kernel:invalid opcode:  [#2] SMP

 Message from syslogd@borneo at Mar 18 23:02:11 ...
  kernel:last sysfs file:
 /sys/devices/pci:00/:00:1c.1/:04:00.0/:05:00.0/irq

 Message from syslogd@borneo at Mar 18 23:02:11 ...
  kernel:Stack:

 Message from syslogd@borneo at Mar 18 23:02:11 ...
  kernel: 8800c4aa57d8 a01dff70 e200039e2538
 e200039e2510

 Message from syslogd@borneo at Mar 18 23:02:11 ...
  kernel: 8800c4aa57e8 810abf43 8800c4aa5938
 810b745e

 Message from syslogd@borneo at Mar 18 23:02:11 ...
  kernel:Call Trace:

 Message from syslogd@borneo at Mar 18 23:02:11 ...
  kernel: [a01dff70] nfs_release_page+0x41/0x46 [nfs]

 Message from syslogd@borneo at Mar 18 23:02:11 ...
  kernel: [810abf43] try_to_release_page+0x34/0x3d

 Message from syslogd@borneo at Mar 18 23:02:11 ...
  kernel: [810b745e] shrink_page_list+0x460/0x5e6

 Message from syslogd@borneo at Mar 18 23:02:11 ...
  kernel: [810b6428] ? list_add+0x11/0x13

 Message from syslogd@borneo at Mar 18 23:02:11 ...
  kernel: [810b6de3] ? isolate_pages_global+0xd1/0x203

 Message from syslogd@borneo at Mar 18 23:02:11 ...
  kernel: [810b7da9] shrink_list+0x2d4/0x621

 Message from syslogd@borneo at Mar 18 23:02:11 ...
  kernel: [810b3b5d] ? determine_dirtyable_memory+0x1a/0x2d

 Message from syslogd@borneo at Mar 18 23:02:11 ...
  kernel: [810b0836] ? __rmqueue_smallest+0xaf/0x12b

 Message from syslogd@borneo at Mar 18 23:02:11 ...
  kernel: [810b838b] shrink_zone+0x295/0x317

 Message from syslogd@borneo at Mar 18 23:02:11 ...
  kernel: [810640b7] ? getnstimeofday+0x5b/0xaf

 Message from syslogd@borneo at Mar 18 23:02:11 ...
  kernel: [810b9485] do_try_to_free_pages+0x211/0x37d

 Message from syslogd@borneo at Mar 18 23:02:11 ...
  kernel: [810b96eb] try_to_free_pages+0x6e/0x70

 Message from syslogd@borneo at Mar 18 23:02:11 ...
  kernel

Re: Amanda Performance

2013-03-17 Thread Amit Karpe
Brian,
Yes, initially I was using a holdingdisk for all configs. But when I saw
the backup was slow and taking more time, I suspected that my backup and
holdingdisk are on the same disk (NAS), which is 19TB. So I changed it and
forced it not to be used.

But now I realize it's required, so I will use it for the next run of Amanda.

Could someone let me know the use of chunksize ?
Should it be big enough, like 10GB or 50GB ?
As I am using Vtape, finally all these temporary files are going
to be merged into one file. So why should we create these chunks ? Why not
let dumper dump directly into the final vtape's slot directory ?

On Fri, Mar 15, 2013 at 9:15 PM, Brian Cuttler br...@wadsworth.org wrote:


 Amit,

 Did I understand you to say that you are not using an amanda
 work area, an area on the server for temporary files?

 Brian

 On Fri, Mar 15, 2013 at 08:15:38AM -0400, Jean-Louis Martineau wrote:
  On 03/15/2013 12:11 AM, Amit Karpe wrote:
  
  I did not able to observe parallel processing. I can see only one
  dumping at a time:
  -bash-4.0$ amstatus DailySet2  | grep dumping
  bengkulu:/var  0 8g dumping6g ( 73.75%) (11:52:57)
  wait for dumping:   00g   (  0.00%)
  dumping to tape :   00g   (  0.00%)
  dumping :   1 6g 8g ( 73.75%) ( 18.47%)
  -bash-4.0$
 
  amstatus has much more information; can you post the complete output
  or, better, post the amdump.X file ?
  Can you also post the email report or the log.datastamp.0 file ?

  You posted a lot of numbers about your hardware and you said you monitor
  it, but you never said how close you are to the hardware limits.
  You posted no numbers about amanda performance (except total time and
  size) and which numbers you think can be improved.
 
  Jean-Louis
 ---
Brian R Cuttler brian.cutt...@wadsworth.org
Computer Systems Support(v) 518 486-1697
Wadsworth Center(f) 518 473-6384
NYS Department of HealthHelp Desk 518 473-0773




-- 
Regards
Amit Karpe.
http://www.amitkarpe.com/
http://news.karpe.net.in/


Re: Amanda Performance

2013-03-17 Thread Amit Karpe
Christoph,
My replies are inline:

On Fri, Mar 15, 2013 at 9:29 PM, C. Scheeder christ...@scheeder.de wrote:

 Hi,
 Summarizing up:
 your clients have 100Mbit-Nics,
 your server has a 1000Mbit-Nic,
 you are not using a holdingdisk, so as far as I recall,
 you are getting the maximum possible performance out of your setup.
 Why?
 Without a holdingdisk, amanda will fetch all your dumps one after the other,
 no matter what you set inparallel to in amanda.conf.


As I mentioned in my last mail, I had doubts, so I had removed its settings.
But now I will use it.




 Or has that behavior changend for newer versions of amanda?

 You are limited by the speed of your client NICs: 100 Mbit/sec means max 11
 MByte/sec,
 and a short calculation leads to roughly 3 to 4 days of backup time.

 If your NAS has a 1000Mbit NIC, and if the systems are connected together
 by a 1 GBit/sec switch, then do yourself a favor and put a holdingdisk
 into your server;
 I would suggest a SATA disk with around 2 times the capacity of the
 largest DLE you have.
 It will cut backup time dramatically, as amanda will start dumping many
 hosts in parallel.

 But if your NAS only has a 100MBit NIC, or you don't have a GBit switch,
 you'll never get
 amanda faster than now, nor will any other backup solution.
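The short calculation Christoph refers to, written out (a sketch; 2.6TB is the compressed monthly total mentioned earlier in the thread):

```python
# Rough lower bound on backup time when dumps run one after the other
# over 100 Mbit/s client NICs (~11 MB/s of usable payload).
total_tb = 2.6                            # compressed monthly backup size
nic_mb_per_s = 11                         # usable rate of a 100 Mbit/s NIC
seconds = total_tb * 1e6 / nic_mb_per_s   # TB -> MB, then divide by rate
days = seconds / 86400
print(round(days, 1))                     # ~2.7 days of pure transfer time
```

Overheads (compression stalls, seeks, NFS round trips) push that figure toward the 3 to 4 days Christoph estimates.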



True, I understand the network limitation.
But if these backup processes execute in parallel, then I will get the
expected performance.
I will use a holding disk and test it again.

Thanks a lot.



 Hope that helps
 Christoph

 On 15.03.2013 07:41, Amit Karpe wrote:

 I am sharing here more info:

 cpu usage

 On server (Intel® Xeon® series Quad core processors @ 2.66GHz)

 # ps -eo pcpu,pid,user,args | sort -r -k1 | head
 %CPU   PID USER COMMAND
   6.0 26873 33   /usr/bin/gzip --fast
   4.3 26906 33   /usr/bin/gzip --fast
 27.7 30002 ntop ntop
   2.1 26517 33   dumper3 DailySet2
   2.1 26515 33   dumper1 DailySet2
   1.4  1851 root [nfsiod]
   1.2  1685 nobody   /usr/sbin/ns-slapd -D /etc/dirsrv/slapd-borneo -i
  /var/run/dirsrv/slapd-borneo.pid -w /var/run/dirsrv/slapd-borneo.startpid
   1.0 27603 root ps -eo pcpu,pid,user,args
   1.0  2135 root [nfsd]

  But on the clients CPU usage is always 80%-90%. So I am planning to use
  server-side compression (compress server fast).


 parallel:
  Though I am using the inparallel option in the config file, I am not sure
  whether multiple dumpers or other processes are running in parallel or not !
    inparallel 30   #performance
    maxdumps 5      #performance


 netusage:
  I read on the forum that netusage is an obsolete option, but I have still
  tried to play around from 8m to 8000m, with no great success. What should
  the value of netusage be if my server's NIC supports 1000 Mbps ?

 maxdumps:
  I have changed it from one to five. How can I make sure whether it is
  working or not ?

  I have tested a 15GB backup, changing the above parameters, 50+ times. I
  see an improvement in performance of only 5%, i.e. I reduced the backup
  time from 18min to 15min. Can someone guide me to improve it further ?


  Client systems: these are ten normal workstations with 4GB RAM, dual-core
  Xeon 2.5GHz, 100 Mbps NIC.
  They have 200G to 800G of data, but the number of files is far larger.
  Just to give an idea:
 # find /disk1 | wc -l
 647139
 # df -h /disk1
 FilesystemSize  Used Avail Use% Mounted on
 /dev/cciss/c0d2   1.8T  634G  1.1T  37% /disk1

 or
 # du -sh .
 202G .
 # find | wc -l
 707172

  I have tried amplot and I got these outputs:
  amdump.1: https://www.dropbox.com/sh/qhh16izq5z43iqj/hx6uplXRUp/20130315094305.ps
  amdump.2: https://www.dropbox.com/sh/qhh16izq5z43iqj/7IecwXLIUp/20130315105836.ps

  Sorry, but I could not understand these plots. I think they just cover the
  first one minute of information.

  Thank you to all those who are helping and answering my dumb questions.





-- 
Regards
Amit Karpe.
http://www.amitkarpe.com/
http://news.karpe.net.in/


Re: Amanda Performance

2013-03-15 Thread Amit Karpe
I am sharing here more info:

cpu usage

On server (Intel® Xeon® series Quad core processors @ 2.66GHz)
# ps -eo pcpu,pid,user,args | sort -r -k1 | head
%CPU   PID USER COMMAND
 6.0 26873 33   /usr/bin/gzip --fast
 4.3 26906 33   /usr/bin/gzip --fast
27.7 30002 ntop ntop
 2.1 26517 33   dumper3 DailySet2
 2.1 26515 33   dumper1 DailySet2
 1.4  1851 root [nfsiod]
 1.2  1685 nobody   /usr/sbin/ns-slapd -D /etc/dirsrv/slapd-borneo -i
/var/run/dirsrv/slapd-borneo.pid -w /var/run/dirsrv/slapd-borneo.startpid
 1.0 27603 root ps -eo pcpu,pid,user,args
 1.0  2135 root [nfsd]

But on the clients CPU usage is always 80%-90%. So I am planning to use
server-side compression (compress server fast).


parallel:
Though I am using the inparallel option in the config file, I am not sure
whether multiple dumpers or other processes are running in parallel or not !
 inparallel 30   #performance
 maxdumps 5      #performance


netusage:
I read on the forum that netusage is an obsolete option, but I have still
tried to play around from 8m to 8000m, with no great success. What should
the value of netusage be if my server's NIC supports 1000 Mbps ?

maxdumps:
I have changed it from one to five. How can I make sure whether it is
working or not ?

I have tested a 15GB backup, changing the above parameters, 50+ times. I see
an improvement in performance of only 5%, i.e. I reduced the backup time from
18min to 15min. Can someone guide me to improve it further ?


Client systems: these are ten normal workstations with 4GB RAM, dual-core
Xeon 2.5GHz, 100 Mbps NIC.
They have 200G to 800G of data, but the number of files is far larger.
Just to give an idea:
# find /disk1 | wc -l
647139
# df -h /disk1
FilesystemSize  Used Avail Use% Mounted on
/dev/cciss/c0d2   1.8T  634G  1.1T  37% /disk1

or
# du -sh .
202G .
# find | wc -l
707172

I have tried amplot and I got these outputs:
amdump.1: https://www.dropbox.com/sh/qhh16izq5z43iqj/hx6uplXRUp/20130315094305.ps
amdump.2: https://www.dropbox.com/sh/qhh16izq5z43iqj/7IecwXLIUp/20130315105836.ps
Sorry, but I could not understand these plots. I think they just cover the
first one minute of information.

Thank you to all those who are helping and answering my dumb questions.


Re: Amanda Performance

2013-03-14 Thread Amit Karpe
Thanks gurus, let me share more info about my setup:

Network-speed:
On main server:
# dmesg | grep -i duplex
bnx2: eth0 NIC Copper Link is Up, 1000 Mbps full duplex

On few clients:
# dmesg | grep -i duplex
tg3: eth0: Link is up at 1000 Mbps, full duplex.
# dmesg | grep -i duplex
[   13.122791] e1000e: em1 NIC Link is Up 100 Mbps Full Duplex, Flow
Control: None
# dmesg | grep -i duplex
[   69.739204] e1000e: em1 NIC Link is Up 100 Mbps Full Duplex, Flow
Control: Rx/Tx


Selected lines of amanda-config
amanda-config for DailySet2 -- http://pastebin.com/42z3RYZb
Mostly I am using the dumptypes local-files, remote-files, and remote-files1.
I used the above conf for the weekly backup.

amanda-config for final -- http://pastebin.com/QbHawKnz
I used the above conf for the monthly backup, which ends with 2.6TB of backup
and took 4-5 days. Very recently I did some fine tuning which I expected to
improve the performance, but the gain was not more than 5%. Entries like the
following:
device_output_buffer_size 128m  #performance
chunksize 1000 GB               #performance
inparallel 30                   #performance
maxdumps 5                      #performance
blocksize 8192 kbytes           #performance
readblocksize 8 m               #performance

I was also thinking that, as I am using the same disk for the holding disk
and the final backup destination, I should disable use of the holding disk,
which I did with the following parameter, but without much success:
holdingdisk never               #performance


We have a NAS for the holding disk as well as for the final backup in virtual
tapes (QNAP TS-879 Pro:
http://www.qnap.com/en/index.php?lang=en&sn=822&c=351&sc=698&t=701&n=3423 )
# df -h /nasbackup/
Filesystem               Size  Used Avail Use% Mounted on
172.21.124.65:/nasbackup  19T  8.3T   11T  44% /nasbackup


I have tested distributing the DLEs so that a few DLEs have compression on
the server and a few have compression on the clients.

I was not able to observe parallel processing. I can see only one dump
running at a time:
-bash-4.0$ amstatus DailySet2  | grep dumping
bengkulu:/var  0 8g dumping6g ( 73.75%) (11:52:57)
wait for dumping:   00g   (  0.00%)
dumping to tape :   00g   (  0.00%)
dumping :   1 6g 8g ( 73.75%) ( 18.47%)
-bash-4.0$

I can see only one file being updated on the holding disk.
# ls -lh /nasbackup/dumps/amanda1/2*
total 7.7G
-rw---+ 1 amandabackup disk 7.7G 2013-03-15 12:02 bengkulu._var.0.tmp

Can someone explain to me how to achieve parallel processing ?
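A quick way to check this from amstatus output (a sketch; the sample lines below are made up to mimic amstatus, and the grep pattern assumes active DLEs show "dumping" after the host:disk column):

```shell
# Count the DLE lines that amstatus reports as actively "dumping".
# In a live run you would pipe the real output in instead:
#   amstatus DailySet2 | grep -c ':.*dumping'
sample='bengkulu:/var   0  8g dumping 6g ( 73.75%) (11:52:57)
phuket:/a       0  9g dumping 2g ( 22.00%) (11:53:10)
wait for dumping:   0  0g (  0.00%)'
active=$(printf '%s\n' "$sample" | grep -c ':.*dumping')
echo "active dumpers: $active"   # the summary line does not match
```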


I am adding more info in the next mail.


Thanks again.

On Thu, Mar 14, 2013 at 9:22 PM, Brian Cuttler br...@wadsworth.org wrote:


 Amit,

 I don't think you told us how many client systems, compression
 can be done on the client or the server. Also, besides the inparallel
 and maxdump settings, are you short on work area - as Jean-Louis
 said, the amplot output will help you spot those bottlenecks.

 Brian

 On Thu, Mar 14, 2013 at 08:27:11AM -0400, Jean-Louis Martineau wrote:
  Compression is often a CPU bottleneck; did you check CPU usage ? You
  can try to use pigz instead of gzip if you have available cores.

  How many dumps are you doing in parallel ? You can try to increase
  inparallel, netusage and/or maxdumps.
 
  You can use amplot and amstatus to check amanda performance.
 
  Jean-Louis
 
 
  On 03/13/2013 10:44 PM, Amit Karpe wrote:
  Hi all,
  I am using Amanda to take backups weekly and monthly. The monthly backup,
  which is 2.5 to 2.7TB in size after compression, takes
  4-5 days. (The total size is around 6-7 TB, and there are 52 DLE entries
  from 10 different hosts on the network. I am backing up to a NAS, where I
  have 19T total space.)
  Of course there are various parameters we have to consider before claiming
  whether it is a slow process or not.
  Could you please let me know how I should check and compare whether my
  backup process is slow or not ?
  Which are main parameter which affect Amanda Performance ?
  Which tool I should use to check Amanda Performance ?
  Currently I am using following steps:
  
  1. I have started monthly backup.
  2. Using bandwidth monitoring tools (ntop, bmon) I am checking the backup
  server to NAS bandwidth usage and traffic status.
  3. Using iotop I am checking the status / speed of IO operations.
  4. There are a few other tools which may help to understand IO and hard
  disk usage. But as my backup directory is not a local device (I have
  mounted it as an NFS directory) I can't run hdparm or iostat directly.
  5. Monitoring the NAS's admin interface for its bandwidth usage.
  6. Currently I am looking for some statistics which would help to compare
  with my current setup.
  
  Still I can't understand whether I am going the right way or not !
  It will be great if you can help me here.
  
  --
  Regards
  Amit Karpe.
  http://www.amitkarpe.com/
  http://news.karpe.net.in/
 
 ---
Brian R Cuttler brian.cutt...@wadsworth.org
Computer

Amanda Performance

2013-03-13 Thread Amit Karpe
Hi all,
I am using Amanda to take backups weekly and monthly. The monthly backup,
which is 2.5 to 2.7TB in size after compression, takes 4-5 days.
(The total size is around 6-7 TB, and there are 52 DLE entries from 10
different hosts on the network. I am backing up to a NAS, where I have 19T
total space.)
Of course there are various parameters we have to consider before claiming
whether it is a slow process or not.
Could you please let me know how I should check and compare whether my
backup process is slow or not ?
Which are the main parameters that affect Amanda performance ?
Which tools should I use to check Amanda performance ?
Currently I am using the following steps:

1. I have started the monthly backup.
2. Using bandwidth monitoring tools (ntop, bmon) I am checking the backup
server to NAS bandwidth usage and traffic status.
3. Using iotop I am checking the status / speed of IO operations.
4. There are a few other tools which may help to understand IO and hard disk
usage. But as my backup directory is not a local device (I have mounted it
as an NFS directory) I can't run hdparm or iostat directly.
5. Monitoring the NAS's admin interface for its bandwidth usage.
6. Currently I am looking for some statistics which would help to compare
with my current setup.

Still I can't understand whether I am going the right way or not !
It will be great if you can help me here.
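One comparable number the steps above don't give directly is the average throughput of the whole run (a sketch; the 2.6TB and 4.5-day figures are taken from this mail):

```python
# Average end-to-end throughput of the monthly run: bytes written
# divided by wall-clock time. Comparing it to the slowest link in the
# path (a 100 Mbit/s client NIC is ~11 MB/s) shows whether the run is
# network-bound or losing time elsewhere.
backup_tb = 2.6        # compressed size of the monthly backup
days = 4.5             # wall-clock duration of the run
mb_per_s = backup_tb * 1e6 / (days * 86400)
print(round(mb_per_s, 1))   # ~6.7 MB/s, below even a single client NIC
```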

-- 
Regards
Amit Karpe.
http://www.amitkarpe.com/
http://news.karpe.net.in/


Re: Amanda Performance

2013-03-13 Thread Amit Karpe
Just to add more info:

-bash-4.0$ rpm -qa | grep amanda
amanda-client-2.6.0p2-9.fc11.x86_64
amanda-2.6.0p2-9.fc11.x86_64
amanda-devel-2.6.0p2-9.fc11.x86_64
amanda-server-2.6.0p2-9.fc11.x86_64
-bash-4.0$ uname -a
Linux borneo 2.6.30.10-105.2.23.fc11.x86_64 #1 SMP Thu Feb 11 07:06:34 UTC
2010 x86_64 x86_64 x86_64 GNU/Linux


On Thu, Mar 14, 2013 at 10:44 AM, Amit Karpe amitka...@gmail.com wrote:

 Hi all,
 I am using Amanda to take backups weekly and monthly. The monthly backup,
 which is 2.5 to 2.7TB in size after compression, takes 4-5
 days. (The total size is around 6-7 TB, and there are 52 DLE entries from
 10 different hosts on the network. I am backing up to a NAS, where I have
 19T total space.)
 Of course there are various parameters we have to consider before claiming
 whether it is a slow process or not.
 Could you please let me know how I should check and compare whether my
 backup process is slow or not ?
 Which are the main parameters that affect Amanda performance ?
 Which tools should I use to check Amanda performance ?
 Currently I am using the following steps:

 1. I have started the monthly backup.
 2. Using bandwidth monitoring tools (ntop, bmon) I am checking the backup
 server to NAS bandwidth usage and traffic status.
 3. Using iotop I am checking the status / speed of IO operations.
 4. There are a few other tools which may help to understand IO and hard
 disk usage. But as my backup directory is not a local device (I have
 mounted it as an NFS directory) I can't run hdparm or iostat directly.
 5. Monitoring the NAS's admin interface for its bandwidth usage.
 6. Currently I am looking for some statistics which would help to compare
 with my current setup.

 Still I can't understand whether I am going the right way or not !
 It will be great if you can help me here.

 --
 Regards
 Amit Karpe.
 http://www.amitkarpe.com/
 http://news.karpe.net.in/




-- 
Regards
Amit Karpe.
http://www.amitkarpe.com/
http://news.karpe.net.in/


Re: Error for amlabel

2013-02-28 Thread Amit Karpe
On Tue, Feb 26, 2013 at 3:50 AM, Marcus Pless mpl...@ucsd.edu wrote:

 On 02/25/2013 01:14:22 AM, Amit Karpe wrote:

 On Mon, Feb 25, 2013 at 4:55 PM, Amit Karpe amitka...@gmail.com wrote:

 
 
  On Fri, Nov 30, 2012 at 12:32 PM, Jon LaBadie j...@jgcomp.com wrote:
 
  On Fri, Nov 30, 2012 at 11:20:37AM +0800, Amit Karpe wrote:
   Any idea for my problem.
 
  The last line of your first message seems critical:
 
  ...
 
I was not able to write anything using amdump or dd.
 
  If you can't read or write using standard system tools
  like dd, there is nothing amanda can do about it.
  Sounds to me like you have hardware problems, either
  the tape drive or maybe the cables.
 
  Jon
  --
  Jon H. LaBadie j...@jgcomp.com
   11226 South Shore Rd.  (703) 787-0688 (H)
   Reston, VA  20190  (609) 477-8330 (C)
 
 
 
   I asked HP for support. Their log tool says my labels are the culprit.
   They are asking me to use only barcode labeling:
   "As per the Amanda Tool output and logs there is one cartridge in the MSL
   which has either a missing or wrong barcode label, or the barcode label
   is not properly aligned on the tape cartridge.

   Please remove all the tapes and check if you find such a cartridge, and
   if you find it, then put the barcode label on it correctly and insert it
   back in the same slot.

   This error occurs because during initialization before backup the MSL
   takes an inventory of the cartridges, and if there is any mismatched
   cartridge, it halts the backup process."
 
 
  As far as my understanding amanda don't required any barcode labeling.
  Then what is a problem ?
  How can I cross check and confirm that hardware problems, either the
 tape
  drive or maybe the cables. ??
 
 
 

 The following link has photos of my tapes. Can someone confirm whether
 having such labels would be the problem?

 http://news.karpe.net.in/hp-storageworks-msl2024-tape-library-1-lto-4




 My old LTO-1 and LTO-2 tape libraries didn't care about barcode labels.
 My LTO-4 (StorageTek SL-500) libraries won't load a tape and bring it
 online unless it has a correctly formatted barcode label. You can buy
 them pre-printed but we just make our own using a simple Brother Ptouch
 label maker and 3/4 inch (18mm) tape. I think a proper label has up to
 8 alpha-numerics but the last 2 characters have to be L4 (if it's an
 LTO4 tape) or the library/drive won't know what to do with it.

 My cleaning cartridge with a factory barcode is labeled CLNU01CU.

 I like to do my own barcode labels so that the human readable label
 matches the amanda software label. The L4 extension doesn't actually
 show when I do something like an mtx status so I don't include that
 in the amanda label.

 IBM has a pdf of the LTO label spec here:

 http://www-1.ibm.com/support/docview.wss?rs=543context=STCVQ6Rq1=ssg1*uid=ssg1S7000429loc=en_UScs=utf-8lang=en+en

 or

 http://tinyurl.com/LTO-label-spec


 Good luck!


Thanks Marcus.
I will check these docs and will update you.
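
The label format Marcus describes — a short uppercase volume ID followed by
the "L4" media identifier for LTO-4 cartridges — can be sanity-checked with a
one-line pattern match before printing labels. A sketch, assuming the common
6-character-plus-L4 form; the label values are made-up examples:

```shell
# Sketch: validate candidate LTO-4 barcode labels against the pattern
# Marcus describes -- six uppercase alphanumerics plus the "L4" media
# identifier. All label values below are hypothetical examples.
check_label() {
    echo "$1" | grep -Eq '^[A-Z0-9]{6}L4$' && echo "$1: ok" || echo "$1: bad"
}
check_label DAILY1L4    # six characters + L4: accepted
check_label DAILY01L4   # seven characters before L4: rejected
check_label CLNU01CU    # cleaning cartridge, different media suffix
```

Printing Ptouch labels that pass this check (and match the Amanda label,
minus the L4 suffix, as Marcus suggests) avoids the library rejecting tapes
at inventory time.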


Re: Error for amlabel

2013-02-25 Thread Amit Karpe
On Fri, Nov 30, 2012 at 12:32 PM, Jon LaBadie j...@jgcomp.com wrote:

 On Fri, Nov 30, 2012 at 11:20:37AM +0800, Amit Karpe wrote:
  Any ideas for my problem?

 The last line of your first message seems critical:

 ...

   I could not write anything using amdump or dd.

 If you can't read or write using standard system tools
 like dd, there is nothing amanda can do about it.
 Sounds to me like you have hardware problems, either
 the tape drive or maybe the cables.

 Jon
 --
 Jon H. LaBadie j...@jgcomp.com
  11226 South Shore Rd.  (703) 787-0688 (H)
  Reston, VA  20190  (609) 477-8330 (C)



I asked HP for support. Their log tool says my labels are the culprit.
He is asking me to use only barcode labeling:
As per the Amanda tool output and logs, there is one cartridge in the MSL
which has either a missing or wrong barcode label, or the barcode label is
not properly aligned on the tape cartridge.

Please remove all the tapes and check if you find such a cartridge, and if
you do, put the barcode label on it correctly and insert it back in the same
slot.

This error occurs because, during initialization before backup, the MSL takes
an inventory of the cartridges, and if there is any mismatched cartridge it
halts the backup process.


As far as I understand, Amanda doesn't require any barcode labeling. So
what is the problem?
How can I cross-check and confirm the hardware problem, whether it is the
tape drive or maybe the cables?


-- 
Regards
Amit Karpe.
http://www.amitkarpe.com/
http://news.karpe.net.in/


Re: Error for amlabel

2013-02-25 Thread Amit Karpe
On Mon, Feb 25, 2013 at 4:55 PM, Amit Karpe amitka...@gmail.com wrote:



 On Fri, Nov 30, 2012 at 12:32 PM, Jon LaBadie j...@jgcomp.com wrote:

 On Fri, Nov 30, 2012 at 11:20:37AM +0800, Amit Karpe wrote:
  Any ideas for my problem?

 The last line of your first message seems critical:

 ...

   I could not write anything using amdump or dd.

 If you can't read or write using standard system tools
 like dd, there is nothing amanda can do about it.
 Sounds to me like you have hardware problems, either
 the tape drive or maybe the cables.

 Jon
 --
 Jon H. LaBadie j...@jgcomp.com
  11226 South Shore Rd.  (703) 787-0688 (H)
  Reston, VA  20190  (609) 477-8330 (C)



 I asked HP for support. Their log tool says my labels are the culprit.
 He is asking me to use only barcode labeling:
 As per the Amanda tool output and logs, there is one cartridge in the MSL
 which has either a missing or wrong barcode label, or the barcode label is
 not properly aligned on the tape cartridge.

 Please remove all the tapes and check if you find such a cartridge, and if
 you do, put the barcode label on it correctly and insert it back in the same
 slot.

 This error occurs because, during initialization before backup, the MSL takes
 an inventory of the cartridges, and if there is any mismatched cartridge it
 halts the backup process.


 As far as I understand, Amanda doesn't require any barcode labeling.
 So what is the problem?
 How can I cross-check and confirm the hardware problem, whether it is the
 tape drive or maybe the cables?




The following link has photos of my tapes. Can someone confirm whether having
such labels would be the problem?

http://news.karpe.net.in/hp-storageworks-msl2024-tape-library-1-lto-4



 --
 Regards
 Amit Karpe.
 http://www.amitkarpe.com/
 http://news.karpe.net.in/




-- 
Regards
Amit Karpe.
http://www.amitkarpe.com/
http://news.karpe.net.in/


Re: Error for amlabel

2012-11-29 Thread Amit Karpe
Any ideas for my problem?
Very recently I bought a new cleaning cartridge. I have confirmed that the
cleaning process works.
But while trying the labeling again, I got the following errors.

-bash-4.0$ for ((i=1; i<=3; i++)); do amlabel DailySet1 DailySet1-$i slot $i; done
labeling tape in slot 1 (tape:/dev/st0):
Reading label...
Error reading 32768 bytes from /dev/st0: Input/output error
Error reading Amanda header.
Found an unlabeled tape.
Writing label DailySet1-1..
Got EIO on /dev/st0, assuming end of tape.
amlabel: Error writing label.

labeling tape in slot 2 (tape:/dev/st0):
Reading label...
Found Amanda tape DailySet1-017
Writing label DailySet1-2..
Error writing final filemark: Input/output error
amlabel: Error closing device.

labeling tape in slot 3 (tape:/dev/st0):
Reading label...
Error reading 32768 bytes from /dev/st0: Input/output error
Error reading Amanda header.
Found an unlabeled tape.
Writing label DailySet1-3..
Got EIO on /dev/st0, assuming end of tape.
amlabel: Error writing label.

Someone please guide me!
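
(As an aside on the `label reset doesn't match labelstr` error in the quoted
message below: labelstr in amanda.conf is an anchored regex, and amlabel
rejects any label name that does not match it. A sketch using the exact
pattern from the error message, with a couple of sample label names:)

```shell
# Sketch: labelstr from amanda.conf is an anchored regex; amlabel only
# accepts labels matching it. Pattern copied from the error message.
LABELSTR='^DailySet1-[0-9][0-9]*$'
for label in reset DailySet1-1 DailySet1-017; do
    if echo "$label" | grep -Eq "$LABELSTR"; then
        echo "$label: accepted"
    else
        echo "$label: rejected"
    fi
done
```

So "reset" is rejected outright, while names like DailySet1-1 pass the
labelstr check; the I/O errors above are a separate, hardware-level issue.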

On Thu, Oct 11, 2012 at 7:38 PM, Amit Karpe amitka...@gmail.com wrote:

 Hi All,
 While writing a label using the following command, I got these errors.
 -bash-4.0$ amlabel DailySet1 reset
 amlabel: label reset doesn't match labelstr ^DailySet1-[0-9][0-9]*$
 -bash-4.0$ amlabel DailySet1 reset
 labeling tape in slot 22 (tape:/dev/nst0):
 Reading label...
 Error reading 32768 bytes from /dev/nst0: Input/output error
 Error reading Amanda header.
 Found an unlabeled tape.
 Writing label reset..
 Error writing final filemark: Input/output error
 amlabel: Error closing device.


 -bash-4.0$ mtx -f /dev/sg2 status
   Storage Changer /dev/sg2:1 Drives, 24 Slots ( 1 Import/Export )
 Data Transfer Element 0:Full (Storage Element 22 Loaded)
   Storage Element 1:Empty
   Storage Element 2:Full
   Storage Element 3:Full
   Storage Element 4:Full
   Storage Element 5:Full
   Storage Element 6:Full
   Storage Element 7:Full
   Storage Element 8:Full
   Storage Element 9:Full
   Storage Element 10:Full
   Storage Element 11:Full
   Storage Element 12:Full
   Storage Element 13:Full
   Storage Element 14:Full
   Storage Element 15:Full
   Storage Element 16:Full
   Storage Element 17:Full
   Storage Element 18:Full
   Storage Element 19:Full
   Storage Element 20:Full
   Storage Element 21:Full
   Storage Element 22:Empty
   Storage Element 23:Full
   Storage Element 24 IMPORT/EXPORT:Empty


 #tail -f /var/log/messages
 Oct 11 18:58:37 borneo kernel: mptscsih: ioc0: task abort: SUCCESS
 (sc=88012db85600)
 Oct 11 18:58:37 borneo kernel: st0: Error 8 (driver bt 0x0, host bt
 0x8).
 Oct 11 19:22:30 borneo kernel: st0: Sense Key : Medium Error [current]
 Oct 11 19:22:30 borneo kernel: Info fld=0x1
 Oct 11 19:22:30 borneo kernel: st0: Add. Sense: Write error


 tpchanger chg-zd-mtx
 #tapedev tape:/dev/st0  # the no-rewind tape device to be used
 tapedev tape:/dev/nst0  # the no-rewind tape device to be used
 changerfile /etc/amanda/DailySet1/changer.conf
 changerdev /dev/sg2
 tapetype HP-LTO4  # what kind of tape it is (see tapetypes below)
 define tapetype HP-LTO4 {
 comment just produced by tapetype prog (hardware compression on)
 length 772096 mbytes
 filemark 0 kbytes
 speed 71093 kps
 }


 Any suggestions?
 I could not write anything using amdump or dd.




-- 
Regards
Amit Karpe.
http://www.amitkarpe.com/
http://news.karpe.net.in/