On Sat, Mar 13, 2010 at 11:58:42PM +0100, Jernej Simončič wrote:

> I'd say this is expected behaviour - the destination saw the file on
> previous run, but didn't see it on current run (because the source
> likely doesn't inform it about files it skips), so it treats the file
> as deleted on source.

Probably so. A corner case, then. Even though it would be easy for the source
to inform the destination about skipped files and avoid this, it's probably
not worth the coding effort.

Another question comes up, though. If gzip'ing a huge file can tie up
considerable resources on a reasonably fast machine for more than 30 minutes
because its logic tells it it's time to gzip a 16 GB file, it would be good
to have a way to ask it not to do that. I see that compression can be turned
off for all files, but not how to turn it off just for the largest files. Is
there some trick that would accomplish that? Basically, compression on
smaller files is always good; compression on the very largest files is
almost always bad; and somewhere in between, depending on system resources,
it gets iffy. It would be useful to have a flag setting a file-size
threshold so that only files below it would be compressed.
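To make the proposal concrete, here's a minimal sketch of the decision logic
such a flag could drive. The function name and the 1 GB default threshold are
hypothetical, not anything rdiff-backup actually provides:

```python
import os

# Hypothetical default: only compress files smaller than 1 GB.
COMPRESS_MAX_BYTES = 1 * 1024**3

def should_compress(path, threshold=COMPRESS_MAX_BYTES):
    """Return True only for files strictly below the size threshold."""
    return os.path.getsize(path) < threshold
```

The backup loop would then gzip a file only when should_compress() says so,
passing the threshold through from a command-line flag.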

Whit


_______________________________________________
rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
http://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki
