Hello!

I have a host that is ~200 GB in size.  Every week I use the archive 
function via a cron job to create an archive on a removable hard 
drive--part of my off-site backup.

The backup server hardware is not terribly powerful (VIA 1500MHz CPU), 
but it meets all of my needs (especially size and power consumption) 
perfectly, except for this one archive job.  The job is configured with 
as little parity as possible: 1%.  Even then, however, it hits the 
1200-minute (72000-second) ClientTimeout.
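For reference, this is the setting I'm talking about in config.pl (the 
value shown is just the 72000-second figure from above):

```perl
# BackupPC main configuration (config.pl)
# Maximum time, in seconds, that a job may run without producing
# output before BackupPC kills it.
$Conf{ClientTimeout} = 72000;    # 1200 minutes / 20 hours
```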

My understanding of the ClientTimeout value is that it is used to kill 
backup jobs that have not generated output for longer than that timeout. 
This is demonstrated by the fact that I've had initial backup jobs take 
*days* to complete successfully.  My archive job, though, is not 
completing: it gets killed at the timeout.

The archive job *is* generating output all along:  an update is 
generated for every 0.1% of progress.  Why isn't that output resetting 
the ClientTimeout?

I would prefer not to lengthen the ClientTimeout:  it's already 20 
*hours* long.  I just want my archive job to reset the ClientTimeout 
timer as it goes along!

Thoughts?

Tim Massey

_______________________________________________
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/