On 2006-05-26 21:35, Matt Ingram wrote:
What is the actual purpose of the estimates? Is the estimate crucial to
Amanda's operation? I've tried estimate calcsize but that still seems
to take forever. Would I be safe trying estimate server, or could that
screw things up?
If I do a flush or dump, and nothing gets wri
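For context, the estimate method is chosen per dumptype; a minimal amanda.conf sketch of the three choices the question refers to (the dumptype name is made up, and the syntax assumes an Amanda of roughly this vintage):

```
define dumptype slow-client-tar {
    program "GNUTAR"
    # estimate client   # default: run the real dump program, output discarded
    # estimate calcsize # walk the tree and sum file sizes; faster, less exact
    estimate server     # skip the client pass; guess from previous runs
}
```

estimate server trades accuracy for speed: the planner schedules from past statistics, so it can misjudge filesystems whose contents change a lot.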
On Fri, May 26, 2006 at 01:56:02PM -0400, Matt Ingram wrote:
> here's the sendsize log. To me it appears that it is just taking
> that long to create the tar. ???
>
> thanks for your response :).
>
tar has serious performance issues with directories containing many small
files. I think
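The two costs discussed in this thread can be sketched with a throwaway tree (the file counts and paths are invented; assumes GNU tar, which special-cases an archive named /dev/null and skips reading file contents):

```shell
#!/bin/sh
# Build a small tree of many tiny files, then compare the two ways an
# estimate can drive tar. On a real 210GB tree the difference is hours.
set -e
dir=$(mktemp -d)
for i in $(seq 1 100); do echo data > "$dir/f$i"; done

# Estimate-style run: GNU tar notices the /dev/null archive name and
# only stats the files, so the time is almost all directory traversal.
tar -cf /dev/null "$dir"

# Pipe-style run: every byte of file data is read and pushed through
# the pipe; this is the expensive path on big trees.
tar -cf - "$dir" | wc -c

rm -rf "$dir"
echo ok
```

Even with the /dev/null shortcut, tar still has to stat every file, which is why trees with huge numbers of small files stay slow regardless.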
sendsize: debug 1 pid 3316 ruid 37 euid 37: start at Fri May 26 00:25:00 2006
sendsize: version 2.4.4
sendsize[3318]: time 0.299: calculating for amname '/ho
On 5/26/06, Matt Ingram <[EMAIL PROTECTED]> wrote:
One of our Amanda clients takes an incredibly long time to do estimates
(17,000 seconds on a level 0, which is about 210GB), and causes the server
to return an estimate timeout error. It seems to happen mainly when it
tries to do a level 0 backup. The server that takes this amount of time is
accessed t
>It uses them to schedule the level 0,1,2,3 dumps and to decide where in
>the order a particular filesystem will be backed up?
If I understood your question correctly, yes.
In (very) general terms, it gathers estimates for a full dump, the
same level as last time and (sometimes) the next level. Then it uses
On Fri, 20 Jul 2001, John R. Jackson wrote:
> >... I noticed that the estimates are basically tar operations with
> >all of the output going to /dev/null. That is what's taking the time.
> >Not to get the file's size, but to move the data through that system
> >pipe into /dev/null.
>
> Nope. Ta
On Fri, 20 Jul 2001, John R. Jackson wrote:
> >My question is this. Why run a separate estimate at all? Why not just
> >monitor the last couple of backups and extrapolate?
>
> That's been suggested before, but nobody has had the time to work on it.
> It should probably be a dumptype option. I'm
On 21 Jul 2001, Marc SCHAEFER wrote:
> Colin Smith <[EMAIL PROTECTED]> wrote:
>
> > I'm running backups on 3 Linux systems, one of the systems is a Cobalt
> > Qube. All the backups are done using GNU tar. It works OK but the
>
> try remounting those fs noatime during estimates:
>
>mount /
Colin Smith <[EMAIL PROTECTED]> wrote:
> I'm running backups on 3 Linux systems, one of the systems is a Cobalt
> Qube. All the backups are done using GNU tar. It works OK but the
try remounting those fs noatime during estimates:
mount / -o remount,noatime
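The remount trick helps because a plain read updates each file's access time; a rough way to see that without root (assumes GNU touch/stat and a filesystem mounted with ordinary atime/relatime semantics):

```shell
#!/bin/sh
# Show that reading a file, as tar's estimate pass does for every file,
# writes back an updated access time. Mounting noatime removes that
# metadata write, which is the point of the remount suggestion.
set -e
f=$(mktemp)
echo data > "$f"
touch -a -d '2000-01-01' "$f"    # backdate only the access time
cat "$f" > /dev/null             # read it, like an estimate would
if [ "$(stat -c %X "$f")" -gt 946684800 ]; then  # 946684800 = 2000-01-01 UTC
    echo "read updated the atime"
fi
rm -f "$f"
```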
> I was just going to write about this. I have just turned on gnutar
> backups. I noticed that the estimates are basically tar operations with
> all of the output going to /dev/null. That is what's taking the time.
> Not to get the file's size, but to move the data through that system
> pipe int
>... I noticed that the estimates are basically tar operations with
>all of the output going to /dev/null. That is what's taking the time.
>Not to get the file's size, but to move the data through that system
>pipe into /dev/null.
Nope. Tar knows it is writing to /dev/null and optimizes out the
Wow!
I was just going to write about this. I have just turned on gnutar
backups. I noticed that the estimates are basically tar operations with
all of the output going to /dev/null. That is what's taking the time.
Not to get the file's size, but to move the data through that system
pipe into
>My question is this. Why run a separate estimate at all? Why not just
>monitor the last couple of backups and extrapolate?
That's been suggested before, but nobody has had the time to work on it.
It should probably be a dumptype option. I'm perfectly happy with the
way estimates are done on my
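A sketch of what that extrapolation could look like; the history format (one "level size-in-kB" pair per line, oldest first) and the numbers are invented, and per the reply above this is not something Amanda actually does:

```shell
#!/bin/sh
# Predict the next level-0 size from recorded history instead of
# walking the filesystem. Hypothetical data: three past level-0 runs.
history="0 2048
0 2112
0 2180"

# Linear extrapolation from the last two runs: next = last + (last - prev)
echo "$history" | awk '
    { size[NR] = $2 }
    END { print size[NR] + (size[NR] - size[NR-1]) }'
```

which prints 2248 for this history. The hard part, as the reply notes, is deciding when the trend can be trusted; a filesystem that just gained 50GB breaks any extrapolation.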
I'm running backups on 3 Linux systems, one of the systems is a Cobalt
Qube. All the backups are done using GNU tar. It works OK but the
estimation time on the backups is nasty. I think I'll turn off the
estimation and just run full dumps every day. The Qube is the slow system
with the problem be