Yes, Les is right - if you change compression (on to off, or off to on),
then the next backup will be like a first-time backup - it will be forced
to a full backup and will not take advantage of any prior backup.
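
For reference, the setting involved is $Conf{CompressLevel}.  Something
like this will show what it is currently set to (the path assumes a stock
install; yours may differ):

    grep -n 'CompressLevel' /etc/BackupPC/config.pl
    # 0 = no compression; any non-zero value enables it.  Flipping between
    # 0 and non-zero is what forces the next backup to behave like a first full.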

Back to the OP - from the rsync_bpc args you sent, it looks like you have
XferLogLevel set to 0.  I'd recommend at least 1 or 2.  That would help
confirm your excludes are correct (they do look pretty comprehensive, but
you should verify that the files actually being backed up match your
intended excludes).  As earlier posters pointed out, incorrect excludes
could mean you are trying to back up some potentially very large files.
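
For example, something like this will show the current value (the config
path and the per-host override location are assumptions - adjust to
wherever your config.pl lives); after raising it, the per-backup XferLOG
lists the files actually transferred, so you can check them against your
excludes:

    grep -n 'XferLogLevel' /etc/BackupPC/config.pl /etc/BackupPC/pc/*.pl 2>/dev/null
    # then set, e.g.:  $Conf{XferLogLevel} = 2;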

I recommend running strace -T -p <PID> on the rsync_bpc process to see what
it is up to and how long the various system calls take.  I agree your
backups should run much faster.
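
A rough sketch of what I mean (the pgrep pattern is just a guess at how the
process shows up in your process list - substitute the real PID if it
matches more than one):

    # attach to the running rsync_bpc and print the time spent in each syscall
    strace -T -tt -f -p "$(pgrep -f rsync_bpc | head -n 1)" 2>&1 | head -n 200

If most of the time is in stat()/open()/read() on the pool, that points at
the file system rather than the network or the remote rsync.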

What is the version of your remote rsync?  Have you confirmed you are not
running short of server memory?
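
Quick things to look at on the server (assuming a Linux host):

    free -m            # overall memory / swap usage
    vmstat 5 5         # watch the si/so columns for swap activity during a backup

and "rsync --version" on the client will tell you the remote version.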

You could check whether ZFS is the issue by (temporarily) running a new
BackupPC instance with its storage on a different file system (e.g., ext4).
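
Even before standing up a second instance, a crude metadata micro-benchmark
run on both file systems would tell you a lot (bonnie++, as Ray suggested,
gives a fuller picture; the directory names here are just placeholders):

    mkdir /path/on/zfs/metatest && cd /path/on/zfs/metatest     # placeholder path
    time bash -c 'for i in $(seq 1 100000); do touch f$i; done' # create 100k empty files
    cd .. && time rm -rf metatest                               # and delete them again
    # repeat the same on an ext4 (or other) mount and compare the timings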

Craig

On Wed, Sep 20, 2017 at 9:05 AM, Les Mikesell <lesmikes...@gmail.com> wrote:

> On Wed, Sep 20, 2017 at 10:20 AM, Gandalf Corvotempesta
> <gandalf.corvotempe...@gmail.com> wrote:
> > 2017-09-20 17:15 GMT+02:00 Ray Frush <fr...@rams.colostate.edu>:
> >> You indicate that your ZFS store does 50-70MB/s.  That's pretty slow in
> >> today's world.  I get bothered when storage is slower than a single
> >> 10K RPM drive (~100-120MB/sec).  I wonder how fast metadata operations
> >> are.  bonnie++ benchmarks might indicate an issue here as BackupPC is
> >> metadata intensive, and has to read a lot of metadata to properly place
> >> files in the CPOOL.  Compare those results with other storage to gauge
> >> how well your ZFS is performing.  I'm not a ZFS expert.
> >
> > Yes, it is not very fast, but keep in mind that I'm using SATA disks.
> > But the issue is not the server's performance, because all the other
> > software is able to back up in a very short time on the same hardware.
>
> BackupPC uses the disk much more intensively than other systems - for
> the reasons that you want to use it.  And I'd guess that ZFS block-level
> compression activity would be fairly inefficient on partial blocks, for
> more or less the same reasons there is a hit with RAID 5.  I assume
> that your attempt to use the --inplace option reflects problems you've
> noticed with your other backup systems.  If it works, --whole-file
> might be better, given a fast LAN and slow disks.  Reconstructing a
> file from copied bits of the old one and merging in the changes is
> pretty expensive.
>
>
> Also, I'm not sure anyone has experience with changing the compression
> level between runs.  If you've done that it might add overhead.
>
> --
>    Les Mikesell
>      lesmikes...@gmail.com
>