Re: [BackupPC-users] Full backups taking a very long time

2021-03-12 Thread David Williams
No, I don’t have a lot of small files.  As I mentioned, the full backups 
weren’t taking that long before the upgrade to Ubuntu 20.04.  I’m not sure why an 
OS upgrade would so severely increase the backup time.  The machine and the hard 
drives (SSDs) are all the same.

The version of BackupPC that comes with Ubuntu 20.04 is 3.3.2-3

Regards,
_
Dave Williams

On Mar 11, 2021, 10:27 AM -0500, backu...@kosowsky.org wrote:
Sorin Srbu wrote at about 08:31:35 +0100 on Thursday, March 11, 2021:
On Wed, 2021-03-10 at 14:04 +, David Williams wrote:
I have recently upgraded to Ubuntu 20.04 and since then I have noticed that my 
full backups are taking much longer than they used to. I’m only using 
BackupPC to back up two machines at home: the Ubuntu machine and a Mac laptop. I 
don’t recall exactly how long the full backups took previously, but now 
they take close to 21 hours. The content on both machines hasn’t changed 
much at all since the upgrade, so I was surprised by the increase in time.

A full backup on the Linux machine is around 892MB. This is the local machine 
that BackupPC is installed on. The drive that the backups are stored on is an 
SSD, as are most, if not all (sorry, can’t remember), of the drives in the Linux 
box. Backup method is tar.

A full backup on the Mac laptop is around 700MB. It’s connected to the same 
router as the Linux machine via ethernet. Backup method is rsync.

I’m not sure how to troubleshoot this increase in timing so any help would be 
much appreciated.

I have seen this happen when many small files are backed up.

Unless there is a truly pathological number of small files (think tens
if not hundreds of millions), I don't think you can explain a 21 hour
backup period.

My Raspberry Pi 4 on a home network backs up my Ubuntu 18.04 machine, with
2.7GB and 321K files, in under 12 minutes. And I think that is with
several simultaneous backups running.
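As a rough way to check whether the slowdown is in reading the data itself or in BackupPC/transport, one could time a plain uncompressed tar stream of a share (this is only an illustrative probe; the share path below is a placeholder, not taken from the thread):

```shell
#!/bin/sh
# Rough throughput probe: stream an uncompressed tar of a share and count
# the bytes read. If this finishes quickly but the BackupPC full backup
# still takes many hours, the bottleneck is more likely BackupPC or the
# transport than the disks themselves.
SHARE="${1:-/tmp}"        # the real share would be e.g. /home
start=$(date +%s)
bytes=$(tar -cf - "$SHARE" 2>/dev/null | wc -c)
end=$(date +%s)
echo "read $bytes bytes from $SHARE in $((end - start)) seconds"
```

Piping through `wc -c` forces the file data to actually be read (GNU tar skips reading contents when writing directly to /dev/null), so this measures read throughput, not just the tree walk.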


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


[BackupPC-users] Conf{RsyncShareName} Sub-directory of a share as a separate share doesn't work?

2021-03-12 Thread thveillon
Hello, re-posting this as I think my first message didn't reach the 
list; sorry if it did.


A happy user of BackupPC 4 (upgraded from 3) on Debian, I am struggling 
with this specific problem:


I have disks mounted by pam_mount inside users' home hierarchies upon 
login. I am trying to back up these mount points as separate shares from 
the /home/user one (but for the same host). So for a user "foo" and a 
mount on /home/foo/bar, I am trying to get /home/foo (or just /home) and 
/home/foo/bar backed up as separate shares.


If I do:

$Conf{RsyncShareName} = [
  '/',
  '/home',
  '/home/foo/bar'
];

with '--one-file-system' in the rsync options, even if I specifically 
exclude '/home/foo/bar' from the '/home' share:


$Conf{BackupFilesExclude} = {
  '/home' => [
'/foo/bar',
  ],
...

and list everything under /home/foo/bar in $Conf{BackupFilesOnly} for 
the '/home/foo/bar' share:


'/home/foo/bar' => [
'/bar1',
'/bar2'
  ],
...

it doesn't work.

I end up with /home/foo/bar being saved in the '/home' share, and the error 
"no file dumped for share '/home/foo/bar'". Note that I have no problem 
with other shares like '/' and '/home'; they are saved separately.

Am I asking too much of rsync?
What am I doing wrong?

Thank you for any ideas. (I have searched the web so much already that I now 
get only previously read pages. I also thought about bind-mounting the shares 
somewhere else, or creating a duplicate "host" config for each 
of those shares, but neither feels satisfying.)





Re: [BackupPC-users] Per-host $Conf{MaxBackups} effects

2021-03-12 Thread Dave Sherohman

On 3/11/21 4:49 PM, Dave Sherohman wrote:

On 3/11/21 4:36 PM, backu...@kosowsky.org wrote:

Look at the code that I recently submitted to the group to streamline
creation/deletion of shadow backups.


I saved those posts, but, honestly, I don't see the advantage of using 
a large script in the per-host config files over having two lines to 
set the pre/post-dump commands.  Yes, the pre/post-dump commands require 
scripts to be installed on each target host, but those scripts come 
along as part of the overall BackupPC client installation that they 
(presumably) need anyhow, in order for rsync and such to be available.
That said, I did still forward your mail to the Windows 
admin, and he said that your method of handling the 
shadow volumes looked better than how the SourceForge client scripts 
were doing it, so we gave your script a shot and it worked 
without a hitch.  Thanks for the tip, and for the script!
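For reference, the "two lines" per-host approach mentioned above looks roughly like this ($sshPath and $host are BackupPC's own substitution variables; the script paths are placeholders, not the actual scripts from the thread):

```perl
# Per-host config sketch: run shadow-copy setup/teardown on the client
# before and after each dump. Script names are hypothetical.
$Conf{DumpPreUserCmd}  = '$sshPath -q -x root@$host /usr/local/bin/shadow-setup.cmd';
$Conf{DumpPostUserCmd} = '$sshPath -q -x root@$host /usr/local/bin/shadow-teardown.cmd';
```

The trade-off discussed in the thread is exactly this: two lines of per-host config, at the cost of keeping the scripts installed and up to date on every client.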





Re: [BackupPC-users] btrfs questions

2021-03-12 Thread Ghislain Adnet

On 06/03/2021 at 15:24, Paul Leyland wrote:

I've seen bit-rot on a few disks out of hundreds used over the last
35-ish years.

I am now storing /var/lib/backuppc on a ZFS RAID since the last
catastrophic disk failure. Sure enough, one of those disks started writing
garbage and then was taken off-line through infant mortality. The pool
kept going. A year or so later a different disk went off-line, with a
dying SATA cable this time. The pool kept going. In both cases
rebuilding the array ("re-silvering") happened automagically.

Very happy with ZFS myself. YMMV.


Curious: I have ZFS on my BackupPC 4 machines and I get horrible performance.

/usr/share/backuppc/bin/BackupPC_refCountUpdate

completely kills the machine for hours, while on btrfs and ext4 it does not 
cause any issue.

I am planning to move off ZFS as soon as I can, because noatime, nosync and 
optimisations like an L2ARC on SSD have not helped the problem :(
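For completeness, the kind of ZFS tuning pass people usually try for metadata-heavy workloads like pool traversal looks like the sketch below. The dataset name is a placeholder, the commands need root, and (as noted above) similar tuning did not help in this poster's case, so this is illustrative rather than a fix:

```shell
#!/bin/sh
# Hedged sketch: ZFS properties commonly suggested for metadata-heavy
# workloads such as BackupPC_refCountUpdate. "tank/backuppc" is a
# placeholder dataset name; run as root on a real pool.
DATASET="${1:-tank/backuppc}"
if command -v zfs >/dev/null 2>&1; then
    zfs set atime=off "$DATASET"   # skip access-time writes on every read
    zfs set xattr=sa  "$DATASET"   # store xattrs inline, fewer IOPS
else
    echo "zfs not available; settings listed only"
fi
echo "tuning pass finished for $DATASET"
```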

--
cordialement,
Ghislain


