I'm in the middle of my first backup using "estimate server" for all of my samba backups.
Most of the DLEs completed nicely, but a few are still running with
ludicrously large percent complete listed. Upon further reading, it
isn't quite as bad as it first appears.
condor://cardinal/g$
FM wrote:
thanks again :-)
here is my updated config :
inparallel 8
netusage 3
maxdumps 10
maxdumps 1 is normal; you may increase maxdumps if you have
a very powerful box.
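For reference, a minimal amanda.conf sketch of conservative parallelism settings (the values are illustrative only, not a recommendation for this site):

```
# Illustrative amanda.conf fragment -- values are examples only
inparallel 8    # total dumpers running at once, server-wide
maxdumps 1      # concurrent dumps per single client; raise only on powerful boxes
```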
dumporder
I read about dump at :
http://dump.sourceforge.net/isdumpdeprecated.html
so I'll try it again.
last question
I'm just starting to use it, due to samba failures. What happens if
the DLE does not have a previous run? Does it revert to the old
standard?
Paul Bijnens <[EMAIL PROTECTED]> writes:
> FM wrote:
>
>> Thanks for all those great comments !
>> Is there a way to tune (or bypass) the estimated time ?
No bandwidth utilized ?
Seems to be plenty of holding disk.
I think I missed the start of the thread, how many client
systems ? What are maxdumps and inparallel set to ?
What is your compression algorithm ? client/server/hw/none ?
Or some mixture of client/server/none for different clients ?
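If compression differs per client, that is normally expressed per dumptype. A hedged sketch, with made-up dumptype names:

```
# Illustrative dumptypes -- names are hypothetical
define dumptype www-client-comp {
    comment "compress on the client to spare the server CPU"
    compress client fast
}
define dumptype www-no-comp {
    comment "already-compressed data: skip compression entirely"
    compress none
}
```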
thanks again :-)
here is my updated config :
inparallel 8
netusage 3
maxdumps 10
dumporder
I read about dump at :
http://dump.sourceforge.net/isdumpdeprecated.html
so I'll try it again.
last question ... for a newbie ;-), what do you mean by :
"set the correct "spindle" for each DLE"
thanks
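For what it's worth, "spindle" is the optional numeric field in the disklist: DLEs that share a physical disk get the same spindle number, so Amanda will not dump them at the same time. A sketch with made-up hosts, paths and dumptype:

```
# Illustrative disklist fragment: host  disk  dumptype  [spindle]
webhost  /var/www/site1  user-tar  1   # same physical disk ...
webhost  /var/www/site2  user-tar  1   # ... same spindle: dumped one at a time
webhost  /home           user-tar  2   # different disk: may dump in parallel
```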
FM wrote:
Thanks for all those great comments !
Is there a way to tune (or bypass) the estimated time ?
In amanda 2.4.5 (currently in beta) you can get very fast
(but less accurate) estimates, taking less than a second.
They are based on statistics from the previous runs.
I have not used it myself (yet).
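If I read the docs correctly, the fast estimate is enabled per dumptype with the "estimate" keyword; a hedged sketch, with a made-up dumptype name:

```
# Illustrative dumptype using the fast server-side estimate (amanda 2.4.5+)
define dumptype samba-fast-est {
    comment "estimate from previous-run statistics instead of a trial run"
    estimate server
}
```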
FM wrote:
here is some info :
We have a performance problem with a 47 GB partition (on the backup server).
OS : Linux Redhat Enterprise AS 3 V4
Version : amanda-2.4.4p1
/dev/cciss/c1d0p2 160G 56G 97G 37% /amanda
inparallel 50
50 dumpers!! and amplot shows only a few active. Probably overkill.
Thanks for all those great comments !
Is there a way to tune (or bypass) the estimated time ?
Paul Bijnens wrote:
FM wrote:
Some partitions have millions of html files
I do not use dump because of this Linus Torvalds email :
http://lwn.net/2001/0503/a/lt-dump.php3
That was 2001.
In that time ext2dump had no maintainer.
That has changed, and dump has caught up again.
And the main reason (but that counts for Solaris
You have enough amanda work area to allow more than 5 concurrent
dumps to run. This is a separate issue, covering the second half of the
graph; on the first half, the estimate phase, you already have far more
capable help than I can give.
On Wed, Mar 02, 2005 at 03:38:29PM -0500, FM wrote:
here is some info :
We have a performance problem with a 47 GB partition (on the backup server).
OS : Linux Redhat Enterprise AS 3 V4
Version : amanda-2.4.4p1
/dev/cciss/c1d0p2 160G 56G 97G 37% /amanda
inparallel 50
netusage 3
maxdumps 5
holdingdisk hd1 {
comment "main holding disk"
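For comparison, a complete holdingdisk block might look like the following; the directory and sizes here are assumptions for illustration, not taken from the original config:

```
# Illustrative holdingdisk block -- path and sizes are made up
holdingdisk hd1 {
    comment "main holding disk"
    directory "/amanda/holding"   # assumed mount point under /amanda
    use 90 Gb                     # cap usage, leaving headroom on the filesystem
    chunksize 1 Gb                # split large dumps into 1 GB chunks on disk
}
```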
FM wrote:
Thanks :)
4 h to estimate !
Even 10 hours! + 8 hours to dump the filesystem.
here is the result of amplot -l -e -p amdump.1
You clearly have an estimate problem.
Using gnutar on a filesystem with zillions of small files?
But why would the estimate take even longer than the real dump?
Maybe
Thanks :)
4 h to estimate !
here is the result of amplot -l -e -p amdump.1
Paul Bijnens wrote:
FM wrote:
I attach the amplot result file of : amplot -l -p amdump.1
What am I supposed to understand from this graph :)
You forgot the "-e" option to amplot to extend the graph beyond the
default of 4 hours.
The dump took 18 hours, and apparently the first 4+ hours were entirely
spent by waiting for the
Hello,
I attach the amplot result file of : amplot -l -p amdump.1
What am I supposed to understand from this graph :)
Attachment: 20050301.pdf (Adobe PDF document)