Re: amdump taking a long time
>So how do I get the CVS version (2.4.2 preferably) ?

  http://amanda.sourceforge.net/fom-serve/cache/136.html
  http://amanda.sourceforge.net/fom-serve/cache/156.html
  http://amanda.sourceforge.net/fom-serve/cache/170.html

You want "-ramanda-242-branch".

You will need a very recent version of automake.  I build from CVS:

  pserver:[EMAIL PROTECTED]:/cvs/automake

And I use the stock autoconf-2.13.

>Gerhard den Hollander

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
Re: amdump taking a long time
* John R. Jackson <[EMAIL PROTECTED]> (Tue, Jul 24, 2001 at 03:15:42PM -0500)
> And remember that when you need to make mods to a Makefile, they need to
> be to Makefile.am (which is only in the CVS), not Makefile.in.  Which,
> in turn, means you need to be conversant with automake (another of life's
> adventures :-).

Which means it's probably a good idea if I get everything out of CVS and
try to make it work with the CVS version.

So how do I get the CVS version (2.4.2 preferably) ?

(I know how to use CVS, I just don't know how to get into the Amanda CVS.)

Kind regards,
--
Gerhard den Hollander                    Phone +31-10.280.1515
Global Technical Support                 Fax   +31-10.280.1511
Jason Geosystems BV                      (When calling please note: we are in GMT+1)
[EMAIL PROTECTED]                        POBox 1573
visit us at http://www.jasongeo.com      3000 BN Rotterdam
JASON...#1 in Reservoir Characterization The Netherlands
Re: amdump taking a long time
>>(comments about making configure.in changes)
>>...
>I haven't, but it should not be too hard
>(famous last words ;0 )

No kidding!  :-)

As I said, poke around for how readline is handled, and steal, steal,
steal :-).

And remember that when you need to make mods to a Makefile, they need to
be to Makefile.am (which is only in the CVS), not Makefile.in.  Which,
in turn, means you need to be conversant with automake (another of life's
adventures :-).

>Calcsize works fine for ufsdump, xfsdump and the dump on SunOS 4.1.
>(though xfsdump is faster [when I tested it (a year or so ago)])

When I suggested disabling calcsize for dump, it was because I was not
sure it would be faster, especially when only two estimates are requested
(dump can estimate a level 0 very, very quickly).

Also, calcsize alters the access time on the directories it traverses.
That's not nearly as big a deal as altering the file access times, but
still might be an issue for some folks.

I realize the final answer is probably close enough (it better be :-).
So I guess since the default will be "no" (don't use calcsize), we can
leave the program test alone and allow it to be used for "DUMP" without
warning.

Speaking of getting the right answer, I haven't looked at the calcsize
code, but I assume it handles files with multiple links and files with
holes properly?

> Gerhard

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
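[Editor's note: John's correctness question maps onto two stat(2) fields: a multiply-linked file shows st_nlink > 1 (its inode should be counted once), and a file with holes occupies fewer disk blocks (st_blocks, in 512-byte units) than its apparent st_size suggests. Whether calcsize actually does this is exactly what is being asked; the following is only a minimal Python sketch of how an estimator can handle both cases.]

```python
import os

def estimate_bytes(paths):
    """Sum disk usage for the given files, counting each
    hard-linked inode only once and charging sparse files
    for the blocks they actually occupy (st_blocks is in
    512-byte units), not their apparent length."""
    seen = set()   # (st_dev, st_ino) pairs of multiply-linked files
    total = 0
    for path in paths:
        st = os.lstat(path)
        if st.st_nlink > 1:
            key = (st.st_dev, st.st_ino)
            if key in seen:
                continue   # already counted via another link
            seen.add(key)
        total += st.st_blocks * 512
    return total
```

An estimator that instead sums st_size would double-count every extra hard link and overstate sparse files, which is the failure mode the question is probing for.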
Re: amdump taking a long time
* John R. Jackson <[EMAIL PROTECTED]> (Mon, Jul 23, 2001 at 01:30:37PM -0500)
>>If I get it working, would you include it ?
> Absolutely.

Great.

>>We would need to link against the libtar.a from tar 1.13.19 alpha though.
>>Would that be a problem ?
> Not if it's done right :-).
> I'd suggest something like this:
> * Add library tests to configure.in ala the readline ones to see if
>   libtar is available (I can help if you've never had the joy of
>   working with autoconf).

I haven't, but it should not be too hard
(famous last words ;0 )

[8<]

> * Finally, I think a dumptype option should be added to control
>   whether calcsize is used ("calcsize ", default is "no").
>   I'm not sure what should happen if program is not "GNUTAR", or if
>   it appears to be a Samba entry.

Calcsize works fine for ufsdump, xfsdump and the dump on SunOS 4.1.
(though xfsdump is faster [when I tested it (a year or so ago)])

It will likely give more or less correct answers for Linux dump as well,
though I don't have any Linux boxes with an ext2 filesystem (except for
/boot).

More or less correct as in: the estimates are within a few % of the final
reported dump size, and the differences between the reported size and the
dumped size can easily be caused by the fact that the filesystem was live.

Currently listening to: The Tea Party - One

Gerhard, (@jasongeo.com)
   == The Acoustic Motorbiker ==
--
   __O      Wake up in the morning, when the sun starts to shine
 =`\<,      My head's still aching and my lips taste of wine
(=)/(=)     Trying to remember, what we both did last night
Re: amdump taking a long time
>If I get it working, would you include it ?

Absolutely.

>We would need to link against the libtar.a from tar 1.13.19 alpha though.
>
>Would that be a problem ?

Not if it's done right :-).

I'd suggest something like this:

* Add library tests to configure.in ala the readline ones to see if
  libtar is available (I can help if you've never had the joy of
  working with autoconf).

* That will define (or undef) HAVE_LIBTAR which can be used in the code
  to make the calls or not.

* Appropriate changes need to be made to client-src/Makefile.am to
  include libtar on the calcsize build line, if available (again, steal
  the template from readline).

* If it's not there already, the appropriate changes to pass the exclude
  list from planner to calcsize need to be added.  This should just be
  a matter of stealing whatever is done in sendsize.

* Calcsize needs to get the exclude option from the packet.  Again,
  stealing the code from sendsize would be appropriate.  If it is not
  built with HAVE_LIBTAR it should whine (not sure if this should be
  fatal or not -- I'm guessing not, just a warning).

* Appropriate libtar calls need to be made to process the exclusions.

* Finally, I think a dumptype option should be added to control whether
  calcsize is used ("calcsize ", default is "no").  I'm not sure what
  should happen if program is not "GNUTAR", or if it appears to be a
  Samba entry.

> Gerhard

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
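[Editor's note: for the first bullet, an autoconf-2.13 check of the kind John describes might look roughly like the fragment below (AC_CHECK_LIB is the same mechanism Amanda's readline test uses). "tar_fnmatch" is a made-up placeholder symbol name; you would substitute a symbol the libtar.a built from GNU tar 1.13.19 actually exports.]

```m4
dnl Sketch only -- "tar_fnmatch" is a hypothetical symbol name;
dnl check the real libtar.a for a symbol to probe.
AC_CHECK_LIB(tar, tar_fnmatch,
    [AC_DEFINE(HAVE_LIBTAR)
     LIBTAR="-ltar"],
    [LIBTAR=""])
AC_SUBST(LIBTAR)
```

client-src/Makefile.am could then append $(LIBTAR) to the calcsize link line, mirroring how the readline library is added.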
Re: amdump taking a long time
* John R. Jackson <[EMAIL PROTECTED]> (Fri, Jul 20, 2001 at 11:40:12AM -0500)
>>calcsize does all 3 estimates in one go ...
> But it does not support exclusion patterns.  That's the one major
> thing holding me back from supporting it.

It doesn't ?  If you compile with -DBUILTIN_EXCLUDE_SUPPORT it should.

(OK, on second thought, I see it doesn't.  Most of the hooks are there
though ...  Hmm, linking it with libtar.a from tar 1.13.19 almost fixes
this.)

> Now if someone wants to figure out how to link calcsize with the GNU
> tar pattern matching code ... :-)

OK, time to put my coding where my mouth is ;)

If I get it working, would you include it ?

We would need to link against the libtar.a from tar 1.13.19 alpha though.
Would that be a problem ?

Currently listening to: Tool - Disposition (Lateralus)

Gerhard, <@jasongeo.com>
   == The Acoustic Motorbiker ==
--
   __O      Standing above the crowd, he had a voice so strong and loud
 =`\<,      we'll miss him
(=)/(=)     Ranting and pointing his finger, at everything but his heart
            we'll miss him
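[Editor's note: for readers wondering what "the GNU tar pattern matching code" does, tar's exclude handling is glob-style (fnmatch-like) matching. The Python sketch below is a deliberately simplified illustration of the behavior calcsize needs; the whole point of linking tar's own matcher, as discussed here, is that its semantics then agree exactly with what GNU tar does at dump time.]

```python
from fnmatch import fnmatch

def excluded(path, patterns):
    """Return True if path matches any glob-style exclude pattern.
    Tries the whole relative path and its basename -- a
    simplification of GNU tar's actual exclusion rules."""
    name = path.rsplit("/", 1)[-1]
    return any(fnmatch(path, p) or fnmatch(name, p)
               for p in patterns)
```

So with patterns like `["*.o", "cache/*"]`, object files and anything under `cache/` would be skipped when totaling the estimate.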
Re: amdump taking a long time
>calcsize does all 3 estimates in one go ...

But it does not support exclusion patterns.  That's the one major thing
holding me back from supporting it.

Now if someone wants to figure out how to link calcsize with the GNU
tar pattern matching code ... :-)

> Gerhard

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
Re: amdump taking a long time
* John R. Jackson <[EMAIL PROTECTED]> (Thu, Jul 19, 2001 at 10:53:03PM -0500)
>>The Apollo fails to return an estimate and fails to backup.  The estimates
>>are taking about 6hrs to complete. ...
> Ick.  I'm sure that's because GNU tar does not perform estimates as fast
> as ufsdump can.
> One possibility, if you're bound and determined to switch :-), would be
> to get GNU tar 1.13.19 from alpha.gnu.org.  That seems to be a stable
> version and may perform the estimates better.

But not by much.  gnutar has to run the estimates 3 times (level 0,
current level, current level + 1).  With an over-500G disk that's easily
6 hours.

> There is also the calcsize approach, but let's leave that for the moment.

calcsize does all 3 estimates in one go, and is more than twice as fast
per estimate as gtar.

Doing estimates on 600+G with calcsize takes a bit less than 45 minutes
on a Sun UE450; the gtar stuff took well over 4 hours for just 2 levels
(level 0 and level 1, since the current level was also level 0).

At that point I broke down, hacked the amanda source a bit and switched
to calcsize.

>>How do I speed the size estimation up ...
> Don't use GNU tar.

Or use calcsize ;)

Currently listening to: CD Audio Track 07

Gerhard, <@jasongeo.com>
   == The Acoustic Motorbiker ==
--
   __O      Some say the end is near.
 =`\<,      Some say we'll see armageddon soon
(=)/(=)     I certainly hope we will
            I could use a vacation
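[Editor's note: the single-pass trick described above is simple: in one walk of the filesystem, compare each file's modification time against the last-dump time of every requested level, so three estimates cost one traversal instead of three. A minimal Python sketch of just that accumulation step (the real calcsize also walks the tree, adds per-file archive overhead, etc.):]

```python
def sizes_per_level(files, level_times):
    """files: iterable of (size_bytes, mtime) pairs.
    level_times: one 'last dumped at' timestamp per requested
    level; a file counts toward a level's estimate if it changed
    after that dump (level 0 uses timestamp 0, so it gets
    everything).  Returns one estimated total per level."""
    totals = [0] * len(level_times)
    for size, mtime in files:            # single pass over the tree
        for i, since in enumerate(level_times):
            if mtime > since:
                totals[i] += size
    return totals
```

Running GNU tar three times instead repeats the expensive traversal once per level, which is where the 3x wall-clock cost comes from.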
Re: amdump taking a long time
>... The tape
>device is a SUN L9 (DLT8000 with autochanger) 40GB (/dev/rmt/1n). ...

We'll come back to this later ...

>I have managed to backup Helios and parts of Apollo using dump uncompressed.
>I want to backup the user partitions compressed with gtar. ...

Why do you want to switch to gtar?  It seems to be the root of your
problems.

And adding software compression (GNU tar or not) is going to kill you.
A single typical 40 GByte full dump here takes 6+ hours just to do the
software compression (on an E4500 or better).  With the amount of data
you have, I'd stick with hardware (tape drive) compression.  And make
sure you don't accidentally do both -- writing software compressed data
to a hardware compressing drive actually uses more space than either
one by itself.

>The Apollo fails to return an estimate and fails to backup.  The estimates
>are taking about 6hrs to complete. ...

Ick.  I'm sure that's because GNU tar does not perform estimates as fast
as ufsdump can.

One possibility, if you're bound and determined to switch :-), would be
to get GNU tar 1.13.19 from alpha.gnu.org.  That seems to be a stable
version and may perform the estimates better.

There is also the calcsize approach, but let's leave that for the moment.

>... Helios partitions that don't fit on the
>holding disk take forever to put on tape. ...

Using GNU tar or ufsdump?

Could you find the amdump. file that goes along with the other files you
sent and pass that along, too?  It contains all the timing information.

>... There is also an Index Tee [broken pipe] error on Apollo.

That probably corresponds to a tape error indicating you hit EOT.

When Amanda hits EOT and it is running direct to tape (not writing a
file from the holding disk, but getting it directly from the client),
it shuts down the client connection, which triggers the broken pipe
message.  Essentially it can be ignored as a side effect.
I noticed the following line in the sendsize*debug file you sent for the
full dump estimate of apollo:/export:

  Total bytes written: 78385356800

Unless you get some amazing compression, you're not going to get this on
a single tape, and since Amanda doesn't yet do tape overflow, it's going
to be a problem.  This is the one reason you might have to go to GNU tar
(although you'd only have to do it for this one file system).

>How do I speed the size estimation up ...

Don't use GNU tar.

>... without adding holding disk how do I speed up the dumping to tape? ...

Not sure about that one, yet.  The amdump. file will explain better when
Amanda is "fast" and when it is "slow".

>... How long should a backup of this size take (see
>helios_partition and apollo_partition)?

It depends on what level backups Amanda picks (i.e. how much data is
actually backed up).  Here are last night's numbers from one of our
large configurations (Sun E4500 (or better) class machine, 400 MHz CPU's,
DLT7000 drives, large -- 100 GByte -- holding disk, ufsdump/vxdump and
tape/hardware compression):

                               Total      Full     Daily
  Estimate Time (hrs:min)       2:06
  Run Time (hrs:min)            9:10
  Avg Dump Rate (k/s)         4164.5    6446.9    3687.6
  Tape Time (hrs:min)           6:54      1:51      5:03
  Tape Size (meg)           148419.5   39705.7  108713.8
  Avg Tp Write Rate (k/s)     6122.1    6105.8    6128.1

The next two "smaller" (although not by much) configurations get roughly
the same performance.

>Sheldon Knight

John R. Jackson, Technical Software Specialist, [EMAIL PROTECTED]
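[Editor's note: the arithmetic behind "not going to get this on a single tape": 78385356800 bytes is about 73 GiB, and the DLT8000's native capacity is quoted as 40 GB (vendor-decimal), so this one file system would need roughly 2:1 compression to land on a single tape. A quick check:]

```python
dump_bytes = 78385356800      # sendsize estimate for apollo:/export
tape_bytes = 40 * 10**9       # DLT8000 native capacity, vendor-decimal GB

print(round(dump_bytes / 2**30, 1))       # dump size in GiB -> 73.0
print(round(dump_bytes / tape_bytes, 2))  # compression needed -> 1.96
```

The drive's "80 GB compressed" rating assumes 2:1, so this dump would only just fit even on a good day, which is why splitting it up or switching that one file system to GNU tar comes up.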
amdump taking a long time
Hi,

I have recently installed amanda and have not as yet managed to
successfully obtain a backup to my satisfaction.

I have a backup server (Helios), which also has an amanda client, and a
remote amanda client (Apollo).  The tape device is a SUN L9 (DLT8000
with autochanger) 40GB (/dev/rmt/1n).  Helios is a SUN Ultra1, single
CPU with 512Mb memory; not a large machine (runs email, samba, web and
file serving).  Apollo is a SUN E420R with 3 CPUs and 1GB memory (runs
compilers and is used for software development).  I have other servers
I wish to backup as well.

I have managed to backup Helios and parts of Apollo using dump
uncompressed.  I want to backup the user partitions compressed with
gtar.  I believe I have applied the tar-1.12 patch.

The Apollo fails to return an estimate and fails to backup.  The
estimates are taking about 6hrs to complete.  Helios partitions that
don't fit on the holding disk take forever to put on tape.  The DLT
does not stream as I would have expected.  There is also an Index Tee
[broken pipe] error on Apollo.

I have a parallel backup of Helios only, to exabyte (/dev/rmt/0u) and to
RAID5 disks, using gtar with hardware compression, that takes about 4
hours to complete.  This uses a very basic script.  I wish to replace
the script with amanda.

How do I speed the size estimation up, and without adding holding disk
how do I speed up the dumping to tape?  Can I improve the streaming?
Anything else I should be doing to improve things?  I know I should be
using Amanda-2.4.2p2 rather than Amanda-2.4.2p1, but I would like to get
a decent backup first.  How long should a backup of this size take (see
helios_partition and apollo_partition)?

The attachments are gzip'd, tar'd *.debug files.

TIA

Sheldon Knight
Systems Administrator
Open Telecommunications (NZ) Limited
P.O. Box 10-388                 DDI:  +64 4 495 9161
Clayton Ford House              CELL: +64 25 540 854
128-132 The Terrace             FAX:  +64 4 495 8419
Wellington                      Email: [EMAIL PROTECTED]
New Zealand                     www.opentelecommunications.com

Attachments: apollo.amanda.debug.tar.gz, helios.amanda.debug.tar.gz,
helios_conf.tar.gz