Re: Amanda Performance

2013-03-19 Thread Amit Karpe
On Mon, Mar 18, 2013 at 10:44 PM, Gene Heskett ghesk...@wdtv.com wrote:

 On Monday 18 March 2013 10:06:59 Amit Karpe did opine:

  Reply in-line:
 
  On Mon, Mar 18, 2013 at 1:59 PM, Jon LaBadie j...@jgcomp.com wrote:
   On Mon, Mar 18, 2013 at 12:55:40PM +0800, Amit Karpe wrote:
Brian,
Yes, initially I was using a holding disk for all configs. But when I
saw the backup was slow and taking more time, I suspected that my
backup and holding disk are on the same disk (NAS), which is 19TB. So
I changed the config to force Amanda not to use it.

But now I realize it is required, so I will use it for the next
Amanda run.
   
Could someone explain the use of chunksize?
Should it be big, like 10GB or 50GB?
As I am using vtapes, all these temporary files are eventually merged
into one file. So why should we create these chunks? Why not let the
dumper write directly into the final vtape's slot directory?
  
   The holding disk chunksize was added to overcome the 2GB max file size
   limitation of some filesystems.  It is also useful if you allocate
   multiple holding disks, some of which may not be big enough for your
   large DLEs.
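  
   As an illustration (the directory and sizes here are placeholders,
   not a recommendation), a holding disk entry in amanda.conf might
   look like this:
  
       holdingdisk hd1 {
           comment "main holding disk"
           directory "/holding1"     # where dump images are staged
           use 200 Gb                # how much of this disk Amanda may use
           chunksize 1 Gb            # split staged images into 1 GB chunks
       }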
  
  
   The parts of Amanda doing the dumps do not peek to see where the data
   will eventually be stored.  Taper does that part, and it is not called
   until a DLE has been successfully dumped.  If you are going direct to
   tape, taper receives the dump directly, but then you can only do one
   DLE at a time.
  
   Jon
 
  So using a chunksize option of 10GB/50GB/100GB will help Amanda run
  dumpers in parallel?

 There are, generally speaking, several aspects of doing backups in
 parallel.

 1. If you don't want your disk(s) to be thrashed by seeking, which of
 course carries a speed penalty, you must restrict the read operations
 to one file at a time per disk spindle; this is in the docs, man
 disklist I believe.
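
 For instance (host and paths invented for the example), the optional
 spindle field in the disklist tells Amanda which DLEs share a physical
 disk, so it will not read two of them at once:

   # hostname   diskname         dumptype        spindle
   borneo       /home/projects   comp-user-tar   1
   borneo       /home/media      comp-user-tar   1  # same spindle: read one at a time
   borneo       /var             comp-user-tar   2  # different spindle: can run in parallel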


Yes, I get the idea. I will try it in the next backup run.


 2. The chunk size is to get around some file system limits that often
 cause things to go all agley when fast integer math in the filesystem
 falls over at 2GB on a single file.  It has nothing, or very little,
 to do with speed other than the overhead of breaking the image up
 during the writes to the holding disk area, then splicing it back
 together as it's sent down the cable to the storage media.  IOW, it is
 to keep your OS from trashing the file as it's being put in the
 holding area as a merged file from the directory tree specified in the
 disklist.  IOW, your individual file is likely not the problem, unless
 that file is a DVD image, but the merged output of tar or (spit) dump
 can easily be more than 2GB.

 I have one directory in my /home that will almost certainly have to be
 handled as separate entries, as I have debian-testing for 3 different
 architectures here.  That would be about 30 disklist entries all by
 itself, as there are 30 DVD images for the whole thing.

 Yes, now I am using a holding disk with a big chunk size of 100GB.

3. Parallelism would probably be helped, given sufficient data-moving
 bandwidth, if more than one holding disk area were allocated, with
 each allocation on a separate spindle/disk, so that the holding disk
 itself is not subjected to the same time-killing seek thrashing; that
 is, IF you also had more than one storage drive being written in
 parallel.  If only one tape is mounted at a time in your setup, then
 once you've taken action against seek thrashing of the source disk(s),
 the next thing is improving the bandwidth.
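
 If you do go that route, something like this in amanda.conf should do,
 provided each directory really lives on its own spindle (names and
 sizes are only examples); inparallel then sets how many dumpers run at
 once:

   inparallel 4

   holdingdisk hd1 {
       directory "/holding1"    # first physical disk
       use 200 Gb
       chunksize 10 Gb
   }
   holdingdisk hd2 {
       directory "/holding2"    # second physical disk
       use 200 Gb
       chunksize 10 Gb
   }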

 I will try to use 2 holding disks in the next backup run, and will
update my results here.



 This last, however, may not be something that Amanda has learned how
 to use effectively since, AFAIK, there is no optional 'spindle' number
 in the holding disk entry to make that distinction.

 I am not sure.


 Cheers, Gene
 --
 There are four boxes to be used in defense of liberty:
  soap, ballot, jury, and ammo. Please use in that order.
 -Ed Howdershelt (Author)
 My web page: http://coyoteden.dyndns-free.com:85/gene is up!
 My views
 http://www.armchairpatriot.com/What%20Has%20America%20Become.shtml
 Engineering without management is art.
 -- Jeff Johnson
 I was taught to respect my elders, but it's getting
 harder and harder to find any...




-- 
Regards
Amit Karpe.
http://www.amitkarpe.com/
http://news.karpe.net.in/


Re: Is my File system corrupt ?

2013-03-19 Thread Heiko Schlittermann
Hi,

Amit Karpe amitka...@gmail.com (Di 19 Mär 2013 04:51:50 CET):
 Hi All,
 I am facing a new problem with my NAS filesystem in one particular
 directory. (Until yesterday I was trying to improve the speed of my
 backup.)
 The filesystem on the NAS is otherwise working well, as I can access
 other directories and even copy files from the NAS to another system.
 
 But I can't access this directory: /nasbackup/full/holdingdisk/20130318133816
 If I run the ls command, it just hangs there, and I can't even kill it
 using the kill command.

If NFS is stuck and mounted as non-interruptible, you're out of luck.
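
(For next time: mounting the share with the intr option, e.g.

    mount -t nfs -o intr nas:/nasbackup /nasbackup

lets you interrupt processes that hang on the mount; the server name
here is made up. Kernels newer than 2.6.25 ignore intr and always let
SIGKILL through, so "kill -9" may still work for you.)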

 My question: is my filesystem corrupt? How can I cross-check?
 I can't use a regular fsck, as I don't have access to the NAS command
 prompt.
…

So the mount is probably via NFS, because iSCSI would give you a block
device which you could check.

 [root@borneo data]#
 Message from syslogd@borneo at Mar 18 23:02:11 ...
  kernel:[ cut here ]
 Message from syslogd@borneo at Mar 18 23:02:11 ...
  kernel:invalid opcode:  [#2] SMP

I'm not a kernel programmer, but invalid opcode does not look good.
Maybe your local file system is corrupted, giving you corrupted
binaries.

I'd say invalid opcode may be caused by invalid binaries (just some
defective bit) or by unreliable memory. (Run memtest…)
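
To check the binaries, if the machine uses RPM packages you can verify
the installed files against the package metadata, roughly:

    rpm -Va          # a '5' in the output flags a changed file checksum

(On Debian-based systems, debsums -c does about the same.) That should
show quickly whether what is on disk still matches what was installed.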


Best regards from Dresden/Germany
Heiko Schlittermann
-- 
 SCHLITTERMANN.de  internet & unix support -
 Heiko Schlittermann, Dipl.-Ing. (TU) - {fon,fax}: +49.351.802998{1,3} -
 gnupg encrypted messages are welcome --- key ID: 7CBF764A -
 gnupg fingerprint: 9288 F17D BBF9 9625 5ABC  285C 26A9 687E 7CBF 764A -
(gnupg fingerprint: 3061 CFBF 2D88 F034 E8D2  7E92 EE4E AC98 48D0 359B)-




Re: Is my File system corrupt ?

2013-03-19 Thread Heiko Schlittermann
Amit Karpe amitka...@gmail.com (Di 19 Mär 2013 06:14:59 CET):
 
 Just now I checked the manuals, and now I have full access to this NAS
 system. Using the QNAP web interface I found that the SMART information
 for all physical disks is Good. I am still investigating what the
 problem could be.

You can switch on SSH access to the NAS and try to run fsck on the file
system, just to make sure that the file system is not corrupted.
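
Roughly like this; the device and mount point vary per QNAP model, so
check first what is actually mounted:

    # on the NAS, via SSH
    mount | grep md            # find the md device behind the data volume
    umount /share/MD0_DATA     # unmount it first; never fsck a mounted fs
    e2fsck -f /dev/md0         # QNAP data volumes are typically ext3/ext4 on md RAID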

Best regards from Dresden/Germany
Heiko Schlittermann
-- 
 SCHLITTERMANN.de  internet & unix support -
 Heiko Schlittermann, Dipl.-Ing. (TU) - {fon,fax}: +49.351.802998{1,3} -
 gnupg encrypted messages are welcome --- key ID: 7CBF764A -
 gnupg fingerprint: 9288 F17D BBF9 9625 5ABC  285C 26A9 687E 7CBF 764A -
(gnupg fingerprint: 3061 CFBF 2D88 F034 E8D2  7E92 EE4E AC98 48D0 359B)-

