Re: Amanda Performance
On Mon, Mar 18, 2013 at 10:44 PM, Gene Heskett ghesk...@wdtv.com wrote:
On Monday 18 March 2013 10:06:59 Amit Karpe did opine:
Reply in-line:
On Mon, Mar 18, 2013 at 1:59 PM, Jon LaBadie j...@jgcomp.com wrote:
On Mon, Mar 18, 2013 at 12:55:40PM +0800, Amit Karpe wrote:

Brian, yes, initially I was using a holding disk in all configs. But when I saw the backup was slow and taking more time, I suspected the cause was that my holding disk is on the same disk (the 19 TB NAS) as the backups, so I changed the config to force it not to be used. Now I realize it is required, so I will use it in the next Amanda run. Could someone explain the use of chunksize? Should it be big, like 10 GB or 50 GB? As I am using vtapes, all these temporary files are finally merged into one file, so why should we create these chunks at all? Why not let the dumper write directly into the final vtape's slot directory?

The holding disk chunksize was added to overcome the 2 GB maximum file size of some filesystems. It is also useful if you allocate multiple holding disks, some of which may not be big enough for your large DLEs. The parts of Amanda doing the dumps do not peek to see where the data will eventually be stored; taper does that, and it is not called until a DLE has been successfully dumped. If you are going direct to tape, taper receives the dump directly, but then you can only do one DLE at a time. Jon

So will using a chunksize option of 10 GB / 50 GB / 100 GB help Amanda run dumpers in parallel?

There are, generally speaking, several aspects of doing backups in parallel. 1. If you don't want your disk(s) thrashed by seeking, which of course carries a speed penalty, you must restrict read operations to one file at a time per disk spindle; this is in the docs, "man disklist" I believe.

Yes, I got the idea. I will try it in the next backup run.

2. The chunk size is there to get around some filesystem limits that often make things go agley when fast integer math in the filesystem falls over at 2 GB on a single file. It has little or nothing to do with speed, other than the overhead of breaking the dump up during the writes to the holding disk area and then splicing it back together as it is sent down the cable to the storage media. IOW, it is there to keep your OS from trashing the file as it is put in the holding area as a merged file from the directory tree specified in the disklist. IOW, your individual files are likely not the problem (unless a file is a DVD image), but the merged output of tar or (spit) dump can easily be more than 2 GB. I have one directory in my /home that will almost certainly have to be split into separate entries, as I keep debian-testing for 3 different architectures here; that would be about 30 disklist entries all by itself, as there are 30 DVD images for the whole thing.

Yes, now I am using a holding disk with a big chunk size, 100 GB.

3. Parallelism would probably be helped, given that the data-moving bandwidth is sufficient, if more than one holding disk area were allocated, with each allocation on a separate spindle/disk, so that the holding disk itself is not subjected to the same time-killing seek thrashing, IF you also had more than one storage drive being written in parallel. If only one tape is mounted at a time in your setup, then once you've taken action against seek thrashing of the source disk(s), the next thing is improving the bandwidth.

I will try to use 2 holding disks in the next backup run and will update my results here.

This last, however, may not be something Amanda has learned how to use effectively, as AFAIK there is no optional 'spindle' number in the holdingdisk entry to make that distinction. I am not sure.

Cheers, Gene
-- There are four boxes to be used in defense of liberty: soap, ballot, jury, and ammo. Please use in that order. -Ed Howdershelt (Author)
My web page: http://coyoteden.dyndns-free.com:85/gene is up!
My views: http://www.armchairpatriot.com/What%20Has%20America%20Become.shtml
Engineering without management is art. -- Jeff Johnson
I was taught to respect my elders, but it's getting harder and harder to find any...

-- Regards, Amit Karpe. http://www.amitkarpe.com/ http://news.karpe.net.in/
Re: Amanda Performance
On Mon, Mar 18, 2013 at 12:55:40PM +0800, Amit Karpe wrote:

Brian, yes, initially I was using a holding disk in all configs. But when I saw the backup was slow and taking more time, I suspected the cause was that my holding disk is on the same disk (the 19 TB NAS) as the backups, so I changed the config to force it not to be used. Now I realize it is required, so I will use it in the next Amanda run. Could someone explain the use of chunksize? Should it be big, like 10 GB or 50 GB? As I am using vtapes, all these temporary files are finally merged into one file, so why should we create these chunks at all? Why not let the dumper write directly into the final vtape's slot directory?

The holding disk chunksize was added to overcome the 2 GB maximum file size of some filesystems. It is also useful if you allocate multiple holding disks, some of which may not be big enough for your large DLEs. The parts of Amanda doing the dumps do not peek to see where the data will eventually be stored; taper does that, and it is not called until a DLE has been successfully dumped. If you are going direct to tape, taper receives the dump directly, but then you can only do one DLE at a time. Jon
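Jon's description of the holding disk and chunksize maps onto a holdingdisk section in amanda.conf roughly like the following. This is a sketch only; the directory path and sizes here are made-up examples, not taken from Amit's actual config:

```
# amanda.conf: a holding disk on a local filesystem with a 2 GB-safe chunksize
holdingdisk hd1 {
    directory "/amanda/holding"   # hypothetical local path, NOT the NAS
    use -100 Mb                   # negative value: use all free space except 100 MB
    chunksize 1 Gb                # split dumps into 1 GB files on the holding disk;
                                  # taper splices them back together at write time
}
```

With a holding disk defined, the dumpers can spool several DLEs at once while taper drains completed dumps to the vtapes.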
Re: Amanda Performance
Hi again,

On 18.03.2013 05:55, Amit Karpe wrote:

Brian, yes, initially I was using a holding disk in all configs. But when I saw the backup was slow and taking more time, I suspected the cause was that my holding disk is on the same disk (the 19 TB NAS) as the backups, so I changed the config to force it not to be used.

That holding disk on the NAS won't buy you much speed improvement. With the holding disk on the NAS, all your data is sent over the network 4 times: 1. from client to Amanda server; 2. from Amanda server to NAS holding disk; 3. from NAS holding disk back to Amanda server; 4. from Amanda server back to your vtape storage. And voilà, there is your network bottleneck again, limiting you to roughly max network bandwidth / 4 MByte/sec for the dumps. You really need physical disk(s) in your Amanda machine for holding space to speed things up.

Christoph
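Christoph's divide-by-four estimate can be checked with quick shell arithmetic. The 1000 Mbit/s figure is the server NIC speed reported elsewhere in the thread; the result is a best-case ceiling before protocol overhead:

```shell
# Best-case dump throughput when the holding disk sits on the NAS:
# every byte crosses the server's single NIC four times.
link_mbit=1000    # server NIC, Mbit/s
passes=4          # client->server, server->NAS, NAS->server, server->vtapes
echo "$(( link_mbit / 8 / passes )) MB/s"   # prints "31 MB/s"
```

Real NFS and TCP overhead would push the sustained rate well below that figure.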
Re: Amanda Performance
Reply in-line:
On Mon, Mar 18, 2013 at 1:59 PM, Jon LaBadie j...@jgcomp.com wrote:
On Mon, Mar 18, 2013 at 12:55:40PM +0800, Amit Karpe wrote:

Brian, yes, initially I was using a holding disk in all configs. But when I saw the backup was slow and taking more time, I suspected the cause was that my holding disk is on the same disk (the 19 TB NAS) as the backups, so I changed the config to force it not to be used. Now I realize it is required, so I will use it in the next Amanda run. Could someone explain the use of chunksize? Should it be big, like 10 GB or 50 GB? As I am using vtapes, all these temporary files are finally merged into one file, so why should we create these chunks at all? Why not let the dumper write directly into the final vtape's slot directory?

The holding disk chunksize was added to overcome the 2 GB maximum file size of some filesystems. It is also useful if you allocate multiple holding disks, some of which may not be big enough for your large DLEs. The parts of Amanda doing the dumps do not peek to see where the data will eventually be stored; taper does that, and it is not called until a DLE has been successfully dumped. If you are going direct to tape, taper receives the dump directly, but then you can only do one DLE at a time. Jon

So will using a chunksize option of 10 GB / 50 GB / 100 GB help Amanda run dumpers in parallel?

-- Regards, Amit Karpe. http://www.amitkarpe.com/ http://news.karpe.net.in/
Re: Amanda Performance
On Mon, Mar 18, 2013 at 3:04 PM, C.Scheeder christ...@scheeder.de wrote:
Hi again,
On 18.03.2013 05:55, Amit Karpe wrote:

Brian, yes, initially I was using a holding disk in all configs. But when I saw the backup was slow and taking more time, I suspected the cause was that my holding disk is on the same disk (the 19 TB NAS) as the backups, so I changed the config to force it not to be used.

That holding disk on the NAS won't buy you much speed improvement. With the holding disk on the NAS, all your data is sent over the network 4 times: 1. from client to Amanda server; 2. from Amanda server to NAS holding disk; 3. from NAS holding disk back to Amanda server; 4. from Amanda server back to your vtape storage. And voilà, there is your network bottleneck again, limiting you to roughly max network bandwidth / 4 MByte/sec for the dumps. You really need physical disk(s) in your Amanda machine for holding space to speed things up. Christoph

Thank you, Christoph. I will try to arrange extra disk space on my backup server.

-- Regards, Amit Karpe. http://www.amitkarpe.com/ http://news.karpe.net.in/
Re: Amanda Performance
On Monday 18 March 2013 10:06:59 Amit Karpe did opine:
Reply in-line:
On Mon, Mar 18, 2013 at 1:59 PM, Jon LaBadie j...@jgcomp.com wrote:
On Mon, Mar 18, 2013 at 12:55:40PM +0800, Amit Karpe wrote:

Brian, yes, initially I was using a holding disk in all configs. But when I saw the backup was slow and taking more time, I suspected the cause was that my holding disk is on the same disk (the 19 TB NAS) as the backups, so I changed the config to force it not to be used. Now I realize it is required, so I will use it in the next Amanda run. Could someone explain the use of chunksize? Should it be big, like 10 GB or 50 GB? As I am using vtapes, all these temporary files are finally merged into one file, so why should we create these chunks at all? Why not let the dumper write directly into the final vtape's slot directory?

The holding disk chunksize was added to overcome the 2 GB maximum file size of some filesystems. It is also useful if you allocate multiple holding disks, some of which may not be big enough for your large DLEs. The parts of Amanda doing the dumps do not peek to see where the data will eventually be stored; taper does that, and it is not called until a DLE has been successfully dumped. If you are going direct to tape, taper receives the dump directly, but then you can only do one DLE at a time. Jon

So will using a chunksize option of 10 GB / 50 GB / 100 GB help Amanda run dumpers in parallel?

There are, generally speaking, several aspects of doing backups in parallel. 1. If you don't want your disk(s) thrashed by seeking, which of course carries a speed penalty, you must restrict read operations to one file at a time per disk spindle; this is in the docs, "man disklist" I believe. 2. The chunk size is there to get around some filesystem limits that often make things go agley when fast integer math in the filesystem falls over at 2 GB on a single file. It has little or nothing to do with speed, other than the overhead of breaking the dump up during the writes to the holding disk area and then splicing it back together as it is sent down the cable to the storage media. IOW, it is there to keep your OS from trashing the file as it is put in the holding area as a merged file from the directory tree specified in the disklist. IOW, your individual files are likely not the problem (unless a file is a DVD image), but the merged output of tar or (spit) dump can easily be more than 2 GB. I have one directory in my /home that will almost certainly have to be split into separate entries, as I keep debian-testing for 3 different architectures here; that would be about 30 disklist entries all by itself, as there are 30 DVD images for the whole thing. 3. Parallelism would probably be helped, given that the data-moving bandwidth is sufficient, if more than one holding disk area were allocated, with each allocation on a separate spindle/disk, so that the holding disk itself is not subjected to the same time-killing seek thrashing, IF you also had more than one storage drive being written in parallel. If only one tape is mounted at a time in your setup, then once you've taken action against seek thrashing of the source disk(s), the next thing is improving the bandwidth. This last, however, may not be something Amanda has learned how to use effectively, as AFAIK there is no optional 'spindle' number in the holdingdisk entry to make that distinction.

Cheers, Gene
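Gene's point 1, limiting reads to one file at a time per physical disk, is expressed through the optional spindle field of a disklist entry. A sketch with made-up hostnames, paths, and dumptype names (see "man disklist" for the exact syntax on your Amanda version):

```
# disklist: DLEs on the same host that share a spindle number are dumped
# one at a time; entries with different numbers may be dumped in parallel.
#
# host     disk     dumptype       spindle
client1    /home    comp-user-tar  1    # same physical disk as /var
client1    /var     comp-user-tar  1    # serialized with /home
client1    /disk1   comp-user-tar  2    # separate disk: may run concurrently
```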
Re: Amanda Performance
Brian, yes, initially I was using a holding disk in all configs. But when I saw the backup was slow and taking more time, I suspected the cause was that my holding disk is on the same disk (the 19 TB NAS) as the backups, so I changed the config to force it not to be used. Now I realize it is required, so I will use it in the next Amanda run. Could someone explain the use of chunksize? Should it be big, like 10 GB or 50 GB? As I am using vtapes, all these temporary files are finally merged into one file, so why should we create these chunks at all? Why not let the dumper write directly into the final vtape's slot directory?

On Fri, Mar 15, 2013 at 9:15 PM, Brian Cuttler br...@wadsworth.org wrote:

Amit, did I understand you to say that you are not using an Amanda work area, i.e. an area on the server for temporary files? Brian

On Fri, Mar 15, 2013 at 08:15:38AM -0400, Jean-Louis Martineau wrote:
On 03/15/2013 12:11 AM, Amit Karpe wrote:

I was not able to observe parallel processing. I can see only one dump at a time:
-bash-4.0$ amstatus DailySet2 | grep dumping
bengkulu:/var 0 8g dumping 6g ( 73.75%) (11:52:57)
wait for dumping: 0 0g ( 0.00%)
dumping to tape : 0 0g ( 0.00%)
dumping : 1 6g 8g ( 73.75%) ( 18.47%)
-bash-4.0$

amstatus has much more information; can you post the complete output or, better, the amdump.X file? Can you also post the email report or the log.datestamp.0 file? You posted a lot of numbers about your hardware and said you monitor it, but you never said how close you are to the hardware limits. You posted no numbers about Amanda performance (except total time and size), nor which numbers you think can be improved. Jean-Louis

--- Brian R Cuttler brian.cutt...@wadsworth.org Computer Systems Support (v) 518 486-1697 Wadsworth Center (f) 518 473-6384 NYS Department of Health Help Desk 518 473-0773

-- Regards, Amit Karpe. http://www.amitkarpe.com/ http://news.karpe.net.in/
Re: Amanda Performance
Christoph, my replies are inline:

On Fri, Mar 15, 2013 at 9:29 PM, C. Scheeder christ...@scheeder.de wrote:

Hi, summarizing up: your clients have 100 Mbit NICs, your server has a 1000 Mbit NIC, and you are not using a holding disk, so as far as I recall you are getting the maximum possible performance out of your setup. Why? Without a holding disk, Amanda will fetch all your dumps one after the other, no matter what you set inparallel to in amanda.conf.

As I mentioned in my last mail, I had doubts, so I removed that setting. But now I will use it.

Or has that behavior changed in newer versions of Amanda? You are limited by the speed of your client NICs: 100 Mbit/sec means at most about 11 MByte/sec, and a short calculation leads to roughly 3 to 4 days of backup time. If your NAS has a 1000 Mbit NIC, and if the systems are connected by a 1 Gbit/sec switch, then do yourself a favor and put a holding disk into your server; I would suggest a SATA disk with around 2 times the capacity of the largest DLE you have. It will cut backup time dramatically, as Amanda will start dumping many hosts in parallel. But if your NAS only has a 100 Mbit NIC, or you don't have a Gbit switch, you'll never get Amanda faster than it is now, nor will any other backup solution.

True, I understand the network limitation. But if these backup processes execute in parallel, then I will get the expected performance. I will use a holding disk and test again. Thanks a lot.

Hope that helps, Christoph

On 15.03.2013 07:41, Amit Karpe wrote:

I am sharing here more info. CPU usage on the server (Intel® Xeon® quad-core @ 2.66 GHz):
# ps -eo pcpu,pid,user,args | sort -r -k1 | head
%CPU PID USER COMMAND
6.0 26873 33 /usr/bin/gzip --fast
4.3 26906 33 /usr/bin/gzip --fast
27.7 30002 ntop ntop
2.1 26517 33 dumper3 DailySet2
2.1 26515 33 dumper1 DailySet2
1.4 1851 root [nfsiod]
1.2 1685 nobody /usr/sbin/ns-slapd -D /etc/dirsrv/slapd-borneo -i /var/run/dirsrv/slapd-borneo.pid -w /var/run/dirsrv/slapd-borneo.startpid
1.0 27603 root ps -eo pcpu,pid,user,args
1.0 2135 root [nfsd]

But on the clients CPU usage is always 80-90%, so I am planning to use "compress server fast".

Parallelism: though I am using the inparallel option in the config file, I am not sure whether multiple dumpers or other processes actually run in parallel:
inparallel 30 #performance
maxdumps 5 #performance

netusage: I read on a forum that netusage is an obsolete option, but I still tried values from 8m to 8000m, with no great success. What should netusage be set to if my server's NIC supports 1000 Mbps?

maxdumps: I changed it from one to five. How can I verify whether it is working?

I have tested a 15 GB backup, changing the above parameters, 50+ times. I see only about a 5% improvement in performance, i.e. I reduced backup time from 18 min to 15 min. Can someone guide me to improve it further?

Client systems: ten normal workstations with 4 GB RAM, dual-core 2.5 GHz Xeon, 100 Mbps NIC. They hold 200 GB to 800 GB of data each, but the number of files is far larger. Just to give an idea:
# find /disk1 | wc -l
647139
# df -h /disk1
Filesystem Size Used Avail Use% Mounted on
/dev/cciss/c0d2 1.8T 634G 1.1T 37% /disk1
or
# du -sh .
202G .
# find | wc -l
707172

I have tried amplot and got these outputs:
amdump.1: https://www.dropbox.com/sh/qhh16izq5z43iqj/hx6uplXRUp/20130315094305.ps
amdump.2: https://www.dropbox.com/sh/qhh16izq5z43iqj/7IecwXLIUp/20130315105836.ps
Sorry, but I could not understand these plots; I think they only cover the first minute of information. Thank you to all those who are helping and answering my dumb questions.

-- Regards, Amit Karpe. http://www.amitkarpe.com/ http://news.karpe.net.in/
Re: Amanda Performance
I am sharing here more info. CPU usage on the server (Intel® Xeon® quad-core @ 2.66 GHz):
# ps -eo pcpu,pid,user,args | sort -r -k1 | head
%CPU PID USER COMMAND
6.0 26873 33 /usr/bin/gzip --fast
4.3 26906 33 /usr/bin/gzip --fast
27.7 30002 ntop ntop
2.1 26517 33 dumper3 DailySet2
2.1 26515 33 dumper1 DailySet2
1.4 1851 root [nfsiod]
1.2 1685 nobody /usr/sbin/ns-slapd -D /etc/dirsrv/slapd-borneo -i /var/run/dirsrv/slapd-borneo.pid -w /var/run/dirsrv/slapd-borneo.startpid
1.0 27603 root ps -eo pcpu,pid,user,args
1.0 2135 root [nfsd]

But on the clients CPU usage is always 80-90%, so I am planning to use "compress server fast".

Parallelism: though I am using the inparallel option in the config file, I am not sure whether multiple dumpers or other processes actually run in parallel:
inparallel 30 #performance
maxdumps 5 #performance

netusage: I read on a forum that netusage is an obsolete option, but I still tried values from 8m to 8000m, with no great success. What should netusage be set to if my server's NIC supports 1000 Mbps?

maxdumps: I changed it from one to five. How can I verify whether it is working?

I have tested a 15 GB backup, changing the above parameters, 50+ times. I see only about a 5% improvement in performance, i.e. I reduced backup time from 18 min to 15 min. Can someone guide me to improve it further?

Client systems: ten normal workstations with 4 GB RAM, dual-core 2.5 GHz Xeon, 100 Mbps NIC. They hold 200 GB to 800 GB of data each, but the number of files is far larger. Just to give an idea:
# find /disk1 | wc -l
647139
# df -h /disk1
Filesystem Size Used Avail Use% Mounted on
/dev/cciss/c0d2 1.8T 634G 1.1T 37% /disk1
or
# du -sh .
202G .
# find | wc -l
707172

I have tried amplot and got these outputs:
amdump.1: https://www.dropbox.com/sh/qhh16izq5z43iqj/hx6uplXRUp/20130315094305.ps
amdump.2: https://www.dropbox.com/sh/qhh16izq5z43iqj/7IecwXLIUp/20130315105836.ps
Sorry, but I could not understand these plots; I think they only cover the first minute of information. Thank you to all those who are helping and answering my dumb questions.
Re: Amanda Performance
Amit, did I understand you to say that you are not using an Amanda work area, i.e. an area on the server for temporary files? Brian

On Fri, Mar 15, 2013 at 08:15:38AM -0400, Jean-Louis Martineau wrote:
On 03/15/2013 12:11 AM, Amit Karpe wrote:

I was not able to observe parallel processing. I can see only one dump at a time:
-bash-4.0$ amstatus DailySet2 | grep dumping
bengkulu:/var 0 8g dumping 6g ( 73.75%) (11:52:57)
wait for dumping: 0 0g ( 0.00%)
dumping to tape : 0 0g ( 0.00%)
dumping : 1 6g 8g ( 73.75%) ( 18.47%)
-bash-4.0$

amstatus has much more information; can you post the complete output or, better, the amdump.X file? Can you also post the email report or the log.datestamp.0 file? You posted a lot of numbers about your hardware and said you monitor it, but you never said how close you are to the hardware limits. You posted no numbers about Amanda performance (except total time and size), nor which numbers you think can be improved. Jean-Louis

--- Brian R Cuttler brian.cutt...@wadsworth.org Computer Systems Support (v) 518 486-1697 Wadsworth Center (f) 518 473-6384 NYS Department of Health Help Desk 518 473-0773
Re: Amanda Performance
Hi, summarizing up: your clients have 100 Mbit NICs, your server has a 1000 Mbit NIC, and you are not using a holding disk, so as far as I recall you are getting the maximum possible performance out of your setup. Why? Without a holding disk, Amanda will fetch all your dumps one after the other, no matter what you set inparallel to in amanda.conf. Or has that behavior changed in newer versions of Amanda?

You are limited by the speed of your client NICs: 100 Mbit/sec means at most about 11 MByte/sec, and a short calculation leads to roughly 3 to 4 days of backup time. If your NAS has a 1000 Mbit NIC, and if the systems are connected by a 1 Gbit/sec switch, then do yourself a favor and put a holding disk into your server; I would suggest a SATA disk with around 2 times the capacity of the largest DLE you have. It will cut backup time dramatically, as Amanda will start dumping many hosts in parallel. But if your NAS only has a 100 Mbit NIC, or you don't have a Gbit switch, you'll never get Amanda faster than it is now, nor will any other backup solution.

Hope that helps, Christoph

On 15.03.2013 07:41, Amit Karpe wrote:

I am sharing here more info. CPU usage on the server (Intel® Xeon® quad-core @ 2.66 GHz):
# ps -eo pcpu,pid,user,args | sort -r -k1 | head
%CPU PID USER COMMAND
6.0 26873 33 /usr/bin/gzip --fast
4.3 26906 33 /usr/bin/gzip --fast
27.7 30002 ntop ntop
2.1 26517 33 dumper3 DailySet2
2.1 26515 33 dumper1 DailySet2
1.4 1851 root [nfsiod]
1.2 1685 nobody /usr/sbin/ns-slapd -D /etc/dirsrv/slapd-borneo -i /var/run/dirsrv/slapd-borneo.pid -w /var/run/dirsrv/slapd-borneo.startpid
1.0 27603 root ps -eo pcpu,pid,user,args
1.0 2135 root [nfsd]

But on the clients CPU usage is always 80-90%, so I am planning to use "compress server fast".

Parallelism: though I am using the inparallel option in the config file, I am not sure whether multiple dumpers or other processes actually run in parallel:
inparallel 30 #performance
maxdumps 5 #performance

netusage: I read on a forum that netusage is an obsolete option, but I still tried values from 8m to 8000m, with no great success. What should netusage be set to if my server's NIC supports 1000 Mbps?

maxdumps: I changed it from one to five. How can I verify whether it is working?

I have tested a 15 GB backup, changing the above parameters, 50+ times. I see only about a 5% improvement in performance, i.e. I reduced backup time from 18 min to 15 min. Can someone guide me to improve it further?

Client systems: ten normal workstations with 4 GB RAM, dual-core 2.5 GHz Xeon, 100 Mbps NIC. They hold 200 GB to 800 GB of data each, but the number of files is far larger. Just to give an idea:
# find /disk1 | wc -l
647139
# df -h /disk1
Filesystem Size Used Avail Use% Mounted on
/dev/cciss/c0d2 1.8T 634G 1.1T 37% /disk1
or
# du -sh .
202G .
# find | wc -l
707172

I have tried amplot and got these outputs:
amdump.1: https://www.dropbox.com/sh/qhh16izq5z43iqj/hx6uplXRUp/20130315094305.ps
amdump.2: https://www.dropbox.com/sh/qhh16izq5z43iqj/7IecwXLIUp/20130315105836.ps
Sorry, but I could not understand these plots; I think they only cover the first minute of information. Thank you to all those who are helping and answering my dumb questions.
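Christoph's "3 to 4 days" figure follows from the numbers in the thread: roughly 2.6 TB of dumps pulled serially at a 100 Mbit client NIC's ~11 MB/s. A quick check with awk (pure arithmetic, no Amanda involved):

```shell
# Serial transfer time for ~2.6 TB at ~11 MB/s (100 Mbit NIC, no parallelism)
awk 'BEGIN {
  tb = 2.6; mb_per_sec = 11
  secs = tb * 1e6 / mb_per_sec          # TB expressed as 10^6 MB
  printf "%.1f days\n", secs / 86400    # prints "2.7 days"
}'
```

That is the theoretical floor; per-DLE setup time, compression stalls, and seek thrashing push the real run toward the observed 4-5 days.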
Re: Amanda Performance
Compression is often a CPU bottleneck; did you check CPU usage? You can try pigz instead of gzip if you have cores available. How many dumps are you doing in parallel? You can try to increase inparallel, netusage and/or maxdumps. You can use amplot and amstatus to check Amanda performance. Jean-Louis

On 03/13/2013 10:44 PM, Amit Karpe wrote:

Hi all, I am using Amanda to take weekly and monthly backups. The monthly backup, which is 2.5 to 2.7 TB in size after compression, takes 4-5 days. (Total size is around 6-7 TB, and there are 52 DLEs from 10 different hosts on the network. I am backing up to a NAS with 19 TB of total space.) Of course there are various parameters to consider before claiming the process is slow. Could you please let me know how I should check and compare whether my backup process is slow? Which are the main parameters that affect Amanda performance? Which tools should I use to check Amanda performance?

Currently I am using the following steps: 1. I start the monthly backup. 2. Using bandwidth monitoring tools, i.e. ntop and bmon, I check backup-server-to-NAS bandwidth usage and traffic status. 3. Using iotop I check the status/speed of I/O operations. 4. There are a few other tools that may help to understand I/O and hard disk usage, but as my backup directory is not a local device (it is NFS-mounted) I can't run hdparm or iostat on it directly. 5. I monitor the NAS's admin interface for its bandwidth usage. 6. I am looking for statistics that would help compare against my current setup. Still I can't tell whether I am going the right way or not! It would be great if you could help me here.

-- Regards, Amit Karpe. http://www.amitkarpe.com/ http://news.karpe.net.in/
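Jean-Louis's pigz suggestion is typically wired in through a custom-compress dumptype. A sketch, assuming pigz is installed at /usr/bin/pigz; the dumptype name is made up, and the exact option names should be checked against your Amanda version's amanda.conf man page:

```
# amanda.conf: compress on the server with pigz instead of single-threaded gzip
define dumptype comp-server-pigz {
    global
    program "GNUTAR"
    compress server custom
    server_custom_compress "/usr/bin/pigz"   # parallel gzip, uses idle cores
}
```

Server-side compression also relieves the 80-90% CPU load reported on the clients, at the cost of sending uncompressed data over the wire.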
Re: Amanda Performance
Amit, I don't think you told us how many client systems there are; compression can be done on the client or the server. Also, besides the inparallel and maxdumps settings, are you short on work area? As Jean-Louis said, the amplot output will help you spot those bottlenecks. Brian

On Thu, Mar 14, 2013 at 08:27:11AM -0400, Jean-Louis Martineau wrote:

Compression is often a CPU bottleneck; did you check CPU usage? You can try pigz instead of gzip if you have cores available. How many dumps are you doing in parallel? You can try to increase inparallel, netusage and/or maxdumps. You can use amplot and amstatus to check Amanda performance. Jean-Louis

On 03/13/2013 10:44 PM, Amit Karpe wrote:

Hi all, I am using Amanda to take weekly and monthly backups. The monthly backup, which is 2.5 to 2.7 TB in size after compression, takes 4-5 days. (Total size is around 6-7 TB, and there are 52 DLEs from 10 different hosts on the network. I am backing up to a NAS with 19 TB of total space.) Of course there are various parameters to consider before claiming the process is slow. Could you please let me know how I should check and compare whether my backup process is slow? Which are the main parameters that affect Amanda performance? Which tools should I use to check Amanda performance?

Currently I am using the following steps: 1. I start the monthly backup. 2. Using bandwidth monitoring tools, i.e. ntop and bmon, I check backup-server-to-NAS bandwidth usage and traffic status. 3. Using iotop I check the status/speed of I/O operations. 4. There are a few other tools that may help to understand I/O and hard disk usage, but as my backup directory is not a local device (it is NFS-mounted) I can't run hdparm or iostat on it directly. 5. I monitor the NAS's admin interface for its bandwidth usage. 6. I am looking for statistics that would help compare against my current setup. Still I can't tell whether I am going the right way or not! It would be great if you could help me here.

-- Regards, Amit Karpe. http://www.amitkarpe.com/ http://news.karpe.net.in/

--- Brian R Cuttler brian.cutt...@wadsworth.org Computer Systems Support (v) 518 486-1697 Wadsworth Center (f) 518 473-6384 NYS Department of Health Help Desk 518 473-0773
Re: Amanda Performance
Thanks gurus, let me share more info about my setup: Network-speed: On main server: # dmesg | grep -i duplex bnx2: eth0 NIC Copper Link is Up, 1000 Mbps full duplex On few clients: # dmesg | grep -i duplex tg3: eth0: Link is up at 1000 Mbps, full duplex. # dmesg | grep -i duplex [ 13.122791] e1000e: em1 NIC Link is Up 100 Mbps Full Duplex, Flow Control: None # dmesg | grep -i duplex [ 69.739204] e1000e: em1 NIC Link is Up 100 Mbps Full Duplex, Flow Control: Rx/Tx Selected lines of amanda-config amanda-config for DailySet2 -- http://pastebin.com/42z3RYZb Mostly I am using dumptype local-files, remote-files, remote-files1. Above conf I used for weekly backup. amanda-config for final -- http://pastebin.com/QbHawKnz Above conf I used for monthly backup. Which end with 2.6TB of backup which took 4-5 days. Very recently I have did some fine tuning which I was expected to improve the performance but it was not more than 5%. Like following entries: device_output_buffer_size 128m#performance chunksize 1000 GB #performance inparallel 30 #performance maxdumps 5 #performance blocksize 8192 kbytes #performance readblocksize 8 m #performance Even I was thinking as I am using same disk as holding disk final backup destination. So I had disable use of holding disk by using following parameter, but no much success. holdingdisknever #performance We having NAS for Holding Disk as well as final backup in virtual tapes. (QNAP TS-879 Prohttp://www.qnap.com/en/index.php?lang=ensn=822c=351sc=698t=701n=3423 ) # df -h /nasbackup/ Filesystem Size Used Avail Use% Mounted on 172.21.124.65:/nasbackup19T 8.3T 11T 44% /nasbackup I have tested distributed dle that few dle will have compression on server few will have compression on clients. I did not able to observe parallel processing. 
I can see only one dumping at a time: -bash-4.0$ amstatus DailySet2 | grep dumping bengkulu:/var 0 8g dumping6g ( 73.75%) (11:52:57) wait for dumping: 00g ( 0.00%) dumping to tape : 00g ( 0.00%) dumping : 1 6g 8g ( 73.75%) ( 18.47%) -bash-4.0$ I can see only one file get updated in holding disk. # ls -lh /nasbackup/dumps/amanda1/2* total 7.7G -rw---+ 1 amandabackup disk 7.7G 2013-03-15 12:02 bengkulu._var.0.tmp Can some explain me how to achieve parallel processing ? I am adding more info in next mail. Thanks again. On Thu, Mar 14, 2013 at 9:22 PM, Brian Cuttler br...@wadsworth.org wrote: Amit, I don't think you told us how many client systems, compression can be done on the client or the server. Also, besides the inparallel and maxdump settings, are you short on work area - as Jean-Louis said, the amplot output will help you spot those bottlenecks. Brian On Thu, Mar 14, 2013 at 08:27:11AM -0400, Jean-Louis Martineau wrote: Compression is often a CPU bottleneck, did you check for cpu usage? You can try to use pigz instead of gzip if you have available core. How many dump are you doing in parallel? You can try to increase inparallel, netusage and/or maxdumps. You can use amplot and amstatus to check amanda performance. Jean-Louis On 03/13/2013 10:44 PM, Amit Karpe wrote: Hi all, I am using Amanda to take backup weekly monthly. For monthly backup which is 2.5 to 2.7TB in size after backup with compression, it take 4-5 days. (Total size is around 6-7 TB, and there 52 entries DLEs, from 10 different host in network. I am backuping on NAS, where I have 19T total space.) Off course there are various parameter we have to consider to claim whether it is slow process or not. Could you please let me know how should I check and compare whether my backup process is slow or not ? Which are main parameter which affect Amanda Performance ? Which tool I should use to check Amanda Performance ? Currently I am using following steps: 1. I have started monthly backup. 2. 
Using bandwidth monitoring tools, i.e. ntop and bmon, I am checking the backup-server-to-NAS bandwidth usage and traffic status. 3. Using iotop I am checking the status/speed of I/O operations. 4. There are a few other tools which may help to understand I/O and hard disk usage, but as my backup directory is not a local device (I have mounted it as an NFS directory) I can't run hdparm or iostat directly. 5. I am monitoring the NAS's admin interface for its bandwidth usage. 6. Currently I am looking for some statistics to compare against my current setup. Still I can't tell whether I am going the right way or not! It would be great if you could help me here. -- Regards Amit Karpe. http://www.amitkarpe.com/ http://news.karpe.net.in/ --- Brian R Cuttler brian.cutt...@wadsworth.org Computer
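[Editor's note: one common reason only a single dump runs, per Gene's point about one read per spindle, is the spindle field in the disklist: DLEs on the same host with the same spindle number are dumped one at a time to avoid seek thrashing, while DLEs on different hosts or spindles may run in parallel, subject to inparallel and maxdumps. A hypothetical disklist sketch, with made-up host names and paths:]

```
# disklist sketch -- entry format: hostname diskname dumptype [spindle]
client1.example.com /var   remote-files  1    # spindle 1
client1.example.com /home  remote-files  1    # same physical disk: runs after /var
client2.example.com /data  remote-files  -1   # -1 (default): no spindle restriction
```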
Re: Amanda Performance
Just to add more info:
-bash-4.0$ rpm -qa | grep amanda
amanda-client-2.6.0p2-9.fc11.x86_64
amanda-2.6.0p2-9.fc11.x86_64
amanda-devel-2.6.0p2-9.fc11.x86_64
amanda-server-2.6.0p2-9.fc11.x86_64
-bash-4.0$ uname -a
Linux borneo 2.6.30.10-105.2.23.fc11.x86_64 #1 SMP Thu Feb 11 07:06:34 UTC 2010 x86_64 x86_64 x86_64 GNU/Linux
On Thu, Mar 14, 2013 at 10:44 AM, Amit Karpe amitka...@gmail.com wrote: *snip -- original message quoted in full earlier* -- Regards Amit Karpe. http://www.amitkarpe.com/ http://news.karpe.net.in/
Re: amanda performance
On Fri, 16 Dec 2005 at 9:22am, Roy Heimbach wrote We have amanda 2.4.5p1 server running under debian linux on a dual opteron that's driving a new overland tape library with an ultrium3 drive. For now, the holding disk is a dedicated local 120 GB drive. A single drive is going to have an awfully hard time... scratch that. A single drive can *not* feed an lto3 drive as fast as it wants to be fed (even if that's the only thing it's trying to do). I've got a 4 disk hardware RAID0 feeding my lto3 drive. There's an amanda 2.4.5p1 client also running under debian linux on another dual opteron that's connected to the amanda server host via a dedicated gig network. This host is a moderately loaded fileserver with hardware raid. Backing up a 10 GB test partition, we're seeing dumper and taper performance around 2.5 MB/sec, a fraction of what the hardware is capable of. *snip* Any suggestions would be welcome. Priority one is to figure out where the slowdown is. Bench the hardware RAID with something like bonnie++ and/or tiobench. Ditto for the holding disk. Use tar to write /dev/zero (using your chosen blocksize) to the tape drive. Then, do a test amdump to holding disk. Amflush that dump. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
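[Editor's note: the benchmarks Joshua suggests can be sketched as shell commands. Paths and the tape device are illustrative; /tmp stands in for the holding disk so the first two steps run anywhere, and dd is used rather than tar because it makes the blocksize explicit.]

```shell
# 1. Sequential write speed of the holding disk; conv=fdatasync flushes
#    to disk so the figure is not just page-cache speed.
dd if=/dev/zero of=/tmp/holding-speedtest bs=32k count=8192 conv=fdatasync
# 2. Sequential read speed, reading the same file back.
dd if=/tmp/holding-speedtest of=/dev/null bs=32k
rm -f /tmp/holding-speedtest
# 3. Raw tape write speed at your chosen Amanda blocksize (needs a real drive):
# dd if=/dev/zero of=/dev/nst0 bs=32k count=8192
```

Comparing the dd rates against the dumper/taper rates from the Amanda report shows which stage is the bottleneck.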
Re: amanda performance
On Friday 16 December 2005 11:42, Joshua Baker-LePain wrote: On Fri, 16 Dec 2005 at 9:22am, Roy Heimbach wrote We have amanda 2.4.5p1 server running under debian linux on a dual opteron that's driving a new overland tape library with an ultrium3 drive. For now, the holding disk is a dedicated local 120 GB drive. A single drive is going to have an awfully hard time... scratch that. A single drive can *not* feed an lto3 drive as fast as it wants to be fed (even if that's the only thing it's trying to do). I've got a 4 disk hardware RAID0 feeding my lto3 drive. Ahh, I'd argue that point Roy. Of the two active drives on this machine, the main one is a 120GB, and I'm using a 200G as virtual disk. These are on separate cables of the same on-board nforce2 controller. As all drives here have DMA enabled, the hdparm -tT test returns are typically in the 50-60 MB/second range. I did have a SCSI controller in here at one time but the tape died so it was removed. But while it was installed, I was able to get 20MB/second transfers to another smallish SCSI drive hooked up temporarily. The drive was rated as SCSI-2-fast, as was the Advansys controller. If this user is only getting 2.5MB/sec, something is wrong with the config someplace. I'd start with the hdparm -Tt /dev/whatever tests and see if DMA can be enabled all across the board. There's an amanda 2.4.5p1 client also running under debian linux on another dual opteron that's connected to the amanda server host via a dedicated gig network. This host is a moderately loaded fileserver with hardware raid. Backing up a 10 GB test partition, we're seeing dumper and taper performance around 2.5 MB/sec, a fraction of what the hardware is capable of. *snip* Any suggestions would be welcome. Priority one is to figure out where the slowdown is. Bench the hardware RAID with something like bonnie++ and/or tiobench. Ditto for the holding disk. Use tar to write /dev/zero (using your chosen blocksize) to the tape drive.
Then, do a test amdump to holding disk. Amflush that dump. Good advice as it may pin-point the bottleneck. -- Cheers, Gene People having trouble with vz bouncing email to me should use this address: [EMAIL PROTECTED] which bypasses vz's stupid bounce rules. I do use spamassassin too. :-) Yahoo.com and AOL/TW attorneys please note, additions to the above message by Gene Heskett are: Copyright 2005 by Maurice Eugene Heskett, all rights reserved.
Re: amanda performance
On Fri, 16 Dec 2005 at 12:08pm, Gene Heskett wrote On Friday 16 December 2005 11:42, Joshua Baker-LePain wrote: A single drive is going to have an awfully hard time... scratch that. A single drive can *not* feed an lto3 drive as fast as it wants to be fed (even if that's the only thing it's trying to do). I've got a 4 disk hardware RAID0 feeding my lto3 drive. Ahh, I'd argue that point Roy. Of the two active drives on this machine, the main one is a 120GB, and I'm using a 200G as virtual disk. These are on separate cables of the same on-board nforce2 controller. As all drives here have DMA enabled, the hdparm -tT test returns are typically in the 50-60 MB/second range. I did have a SCSI controller in here at one time but the tape died so it was removed. But while it was installed, I was able to get 20MB/second transfers to another smallish SCSI drive hooked up temporarily. The drive was rated as SCSI-2-fast, as was the Advansys controller. Overland's datasheet for my Neo2K says the native transfer rate w/ 1 LTO3 drive is 288GB/hour, which translates to 80MB/s. LTO3 drives are *fast*. AIUI, they'll throttle down to 1/2 that without starting DLT-like shoe-shining behavior, but even 40MB/s is pretty quick. If this user is only getting 2.5MB/sec, something is wrong with the config someplace. I'd start with the hdparm -Tt /dev/whatever tests and see if DMA can be enabled all across the board. Oh, absolutely. He should definitely be getting better speeds than he is. I was simply pointing out that a single disk holding area ain't gonna' cut it in production for lto3. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: amanda performance
On Friday 16 December 2005 12:25, Joshua Baker-LePain wrote: *snip* Oh, absolutely. He should definitely be getting better speeds than he is. I was simply pointing out that a single disk holding area ain't gonna' cut it in production for lto3. That almost sounds like he would need a dedicated hardware raid to use as a holding disk then. Ouch. And maybe futile unless that same controller can also handle the tape library, in which case the devices could negotiate their own transfers between themselves, at whatever the limiting speed of the cable might be.
This library is, I take it, a SCSI-3 wide interface? 320MB/sec rated? -- Cheers, Gene
Re: amanda performance
On Fri, 16 Dec 2005 at 1:08pm, Gene Heskett wrote That almost sounds like he would need a dedicated hardware raid to use as a holding disk then. Ouch. And maybe futile unless that same controller can also handle the tape library, in which case the devices could negotiate their own transfers between themselves, at I've got a 4 disk SATA RAID0 on a 3ware controller for the holding disk. whatever the limiting speed of the cable might be. This library is, I take it, a SCSI-3 wide interface? 320MB/sec rated? Yep. In my case, the library is the only thing on the SCSI chain. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
Re: amanda performance
Roy Heimbach wrote: Backing up a 10 GB test partition, we're seeing dumper and taper performance around 2.5 MB/sec, a fraction of what the hardware is capable of. Are you sure you are using the holding disk? If both the dumper and taper report the same speed, that's because you aren't using it. Jean-Louis
Re: amanda performance
On Friday 16 December 2005 13:30, Joshua Baker-LePain wrote: On Fri, 16 Dec 2005 at 1:08pm, Gene Heskett wrote That almost sounds like he would need a dedicated hardware raid to use as a holding disk then. Ouch. And maybe futile unless that same controller can also handle the tape library, in which case the devices could negotiate their own transfers between themselves, at I've got a 4 disk SATA RAID0 on a 3ware controller for the holding disk. And what speeds are reported by hdparm -tT /dev/md0? whatever the limiting speed of the cable might be. This library is, I take it, a SCSI-3 wide interface? 320MB/sec rated? Yep. In my case, the library is the only thing on the SCSI chain. Which means the data has to be piped through the PCI bus, so the maximum on a non-PCI-X bus is 133MB/sec. And the average will be somewhat less than that when the handshaking is factored in. But only 2.5MB/sec says there is a very small pinhole someplace it's being forced through. -- Cheers, Gene
Re: amanda performance
On Fri, 16 Dec 2005 at 3:39pm, Gene Heskett wrote On Friday 16 December 2005 13:30, Joshua Baker-LePain wrote: I've got a 4 disk SATA RAID0 on a 3ware controller for the holding disk. And what speeds are reported by hdparm -tT /dev/md0? How about bonnie++ numbers (thus including FS performance, and ext3 ain't all that stellar a performer):
[EMAIL PROTECTED] jlb]$ bonnie++ -f -s 4096
Version 1.03     ------Sequential Output------ --Sequential Input- --Random-
                 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine     Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
youknowwho.egr.d  4G       157490  74 76721 26           193369 29 361.7   1
                 ------Sequential Create------ --------Random Create--------
                 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
           files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
              16  1359  72 +++++ +++ +++++ +++  2050  99 +++++ +++  6848  98
Yep. In my case, the library is the only thing on the SCSI chain. Which means the data has to be piped through the PCI bus, so the maximum on a non-PCI-X bus is 133MB/sec. And the average will be somewhat less Not true. Standard PCI goes up to 64bit/66MHz, which translates to 500MB/s. The 3ware card is 64/66. The SCSI card is PCI-X at 64bit/133MHz, which is 1000MB/s. than that when the handshaking is factored in. But only 2.5MB/sec says there is a very small pinhole someplace it's being forced through. Oh yes, indeed. -- Joshua Baker-LePain Department of Biomedical Engineering Duke University
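[Editor's note: the bus figures debated here follow from simple arithmetic, peak rate = bus width in bytes x nominal clock in MHz, before protocol overhead; the 500MB/s and 1000MB/s quoted above are these numbers rounded:]

```shell
# Peak theoretical bandwidth = width (bytes) x clock (MHz), giving MB/s
echo $(( 4 * 33 ))    # 32-bit/33MHz PCI    -> 132
echo $(( 8 * 66 ))    # 64-bit/66MHz PCI    -> 528
echo $(( 8 * 133 ))   # 64-bit/133MHz PCI-X -> 1064
```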
Re: amanda performance
On Friday 16 December 2005 15:58, Joshua Baker-LePain wrote: On Fri, 16 Dec 2005 at 3:39pm, Gene Heskett wrote On Friday 16 December 2005 13:30, Joshua Baker-LePain wrote: I've got a 4 disk SATA RAID0 on a 3ware controller for the holding disk. And what speeds are reported by hdparm -tT /dev/md0? How about bonnie++ numbers (thus including FS performance, and ext3 ain't all that stellar a performer): *snip: bonnie++ output* Yep. In my case, the library is the only thing on the SCSI chain. Which means the data has to be piped through the PCI bus, so the maximum on a non-PCI-X bus is 133MB/sec. And the average will be somewhat less Not true. Standard PCI goes up to 64bit/66MHz, which translates to I've yet to find a PCI bus that will run at 66MHz. And I was under the impression it was only 32 bits wide, 4 bytes per bus cycle at 33MHz = 132 megabytes/second. Minus handshaking etc. PCI-X is of course a different horse that can approach gigabyte performance. 500MB/s. The 3ware card is 64/66. The SCSI card is PCI-X at 64bit/133MHz, which is 1000MB/s. Ah, you didn't say PCI-X before. than that when the handshaking is factored in. But only 2.5MB/sec says there is a very small pinhole someplace it's being forced through. Oh yes, indeed. Still true. That bonnie report word wrapped, unforch. -- Cheers, Gene
Re: amanda performance
Thanks, and thanks also to everyone who replied. Turns out DMA was disabled. With DMA disabled, read/write rates on the holding disk were a little over 3 MB/sec. The dumper and taper reported different average performance, but both were in the same ballpark, from 2.4 to 2.6 MB/sec. Not bad at all, considering the speed of the holding disk. Thanks again to everyone who replied. Roy Heimbach -- Roy Heimbach 505-277-8348 / User Services / [EMAIL PROTECTED] University of New Mexico, Albuquerque On Fri, 16 Dec 2005, Gene Heskett wrote: *snip*
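[Editor's note: for readers hitting the same symptom, on IDE hardware of this era DMA could be checked and enabled with hdparm. Device names are examples; modern libata kernels name disks /dev/sdX and enable DMA automatically.]

```shell
hdparm -d /dev/hda       # show current DMA setting (using_dma = 1 means on)
hdparm -d1 /dev/hda      # enable DMA; add to a boot script to make it persist
hdparm -Tt /dev/hda      # re-run the cached/buffered read timings
```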
Re: Amanda performance
On Fri, 11 Nov 2005, Montagni, Giovanni wrote: I have a problem with Amanda's speed. I have an LTO drive with 100GB tapes. Amanda takes more time than necessary to fill a tape: about 8 hours. LTO speed is ~10 MB/s, so I expect the tape to be filled in about 3 hours. I also have another LTO drive, on a Windows machine, and it takes 3 hours to fill tapes. What parameters do I have to set to speed up Amanda? Does the presence of the holding disk influence performance? You should use a holding disk to keep the tape drive at full streaming speed. -- [...] Damned, the Dutch version of the email disclaimer is missing! ;-) Gr{oetje,eeting}s, Geert -- Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- [EMAIL PROTECTED] In personal conversations with technical people, I call myself a hacker. But when I'm talking to journalists I just say programmer or something like that. -- Linus Torvalds
Re: Amanda performance
On Friday 11 November 2005 05:18, Montagni, Giovanni wrote: I have a problem with Amanda's speed. I have an LTO drive with 100GB tapes. Amanda takes more time than necessary to fill a tape: about 8 hours. LTO speed is ~10 MB/s, so I expect the tape to be filled in about 3 hours. I also have another LTO drive, on a Windows machine, and it takes 3 hours to fill tapes. What parameters do I have to set to speed up Amanda? Does the presence of the holding disk influence performance? Without a holding disk, amanda writes the dumps direct to tape, and cannot run several in parallel, which extends the time considerably. The holding disk needs to be maybe 2-4x the size of the largest DLE for best performance. The data is not erased from the holding disk until the file has been completely written, and there needs to be room for the other, smaller DLEs that may get done while the big one is writing. And what about compression? Should I set client best or server best? That depends on which box has the most horsepower, but there's a fudge factor in a multiple-box environment that encourages the use of client best, because each client then does its own compression, all of them at the same time, and the network bandwidth usage is also reduced since the files sent are smaller. In my setup, only 2 machines, one is a 500 MHz K6-III, the other is an XP-2800 Athlon. I have the client doing its own compression as it's quicker than saddling the Athlon with all of it. Thanks in advance. Giovanni
-- Cheers, Gene There are four boxes to be used in defense of liberty: soap, ballot, jury, and ammo. Please use in that order. -Ed Howdershelt (Author)
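[Editor's note: Gene's compression advice maps onto the dumptype compress option. A hypothetical pair of dumptypes (the names are invented) showing the two placements:]

```
# amanda.conf sketch -- dumptype names are made up
define dumptype comp-client {
    global
    comment "compress on the client: spreads CPU load, less network traffic"
    compress client fast     # or "client best" for a better ratio, more CPU
}
define dumptype comp-server {
    global
    comment "compress on the server: for clients with little spare CPU"
    compress server fast
}
```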