Re: [BackupPC-users] How to check pool?

2024-02-20 Thread Alexander Kobel
Hi Christian,

`dmesg` will show you where the checksum errors occur. Can be combined with a 
`btrfs scrub` to get a full report over your entire volume.
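For example (untested here; the mount point is just a placeholder for wherever
your pool filesystem lives):

  btrfs scrub start -B /srv/backuppc   # -B: run in the foreground, report at the end
  btrfs scrub status /srv/backuppc     # summary, incl. the checksum error count
  dmesg | grep -i 'btrfs\|csum'        # the kernel log usually names inode/path of affected files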
If the reported paths point into the BackupPC pool or pc directories, that'll 
generally mean that the affected file(s) are broken and can't be repaired from 
within the backup system. Short of deep-diving into the individual bits and 
hexes on the disk for partial recovery, there's not a lot you can do about it; 
so if possible, delete them and have them backed up again next time.
Note that without deleting them, v4 will not look at the contents of the files, 
IIUC - given the filename/overall hash, it will just assume that the pool file 
and the existing file on the client match, and it will not retransfer. The only 
exception is if the file changed or freshly appeared on the client, in which 
case it will be compared against the pool file. So you'll usually not see a 
checksum error during backup, only during the nightly scans where the contents 
of the pool files are read and their hashes are re-calculated.
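If you want to cross-check from the BackupPC side which pool files are damaged, 
something along these lines should work for your (uncompressed) v4 pool, where 
each pool file is named after the MD5 digest of its contents - untested sketch, 
paths are examples, and it ignores the rare case of digest collisions:

  cd /var/lib/backuppc/pool                    # $TopDir/pool on your system
  find . -type f | while read -r f; do
      want=$(basename "$f" | cut -c1-32)       # digest part of the file name
      have=$(md5sum "$f" | cut -d' ' -f1)
      [ "$have" = "$want" ] || echo "digest mismatch: $f"
  done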

Unfortunately, it is non-trivial and time-consuming to find where a file/hash 
from the pool is referenced in backups. If I remember correctly, you need to 
walk the attrib files for that.
I think I wrote a utility for that ages ago; I can't even remember. But being 
based on zsh, it's apparently not part of the BackupPC distro and could well 
stem from me... Horrible coding, slow, documentation=code, certainly not 
industry strength - all of which supports my authorship. ;-)
It is to be executed in the BackupPC topdir, as far as I remember; it should 
roughly print where a file with a certain hash appears in your pool and 
backups. I guess my use case was similar to yours. The timestamp is from 2019 
and I don't seem to have used it since then, which probably means it was 
written and tested only with v3; I'm not sure whether it still works for v4 
without changes, but you can try.
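For the record, the invocation should be something like the following (the 
script name is whatever you save the attachment as, the topdir path is just an 
example, and BackupPC_poolCntPrint / BackupPC_ls must be in the PATH):

  cd /var/lib/backuppc                         # your $TopDir
  sudo -u backuppc zsh /path/to/find-pool-hash.zsh 0123456789abcdef0123456789abcdef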

Please find it attached, and check whether it helps you. No promises - it 
might eat your data and your cat (though it shouldn't).


Cheers,
Alex


February 18, 2024 at 9:52 AM, "Christian Völker via BackupPC-users" 
<backuppc-users@lists.sourceforge.net> wrote:

Hi all,


I have a large v4 pool (not cpool) (1.6TB) running on top of btrfs.


Is there any chance to perform a pool check from BackupPC to verify all 
data in the pool is still ok?


I am getting some checksum errors from btrfs and I want to know if the 
backed up data is still fine.


Thanks!


/KNEBB






___

BackupPC-users mailing list

BackupPC-users@lists.sourceforge.net
List:
https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:
https://github.com/backuppc/backuppc/wiki
Project:
https://backuppc.github.io/backuppc/



#!/bin/zsh
#
# Usage: run from the BackupPC topdir, with BackupPC's bin directory in PATH:
#   ./thisscript <pool-digest>
# Prints the hosts, backup numbers and shares that reference a pool file with
# the given hash.

hash=$1
TOPDIR=$(pwd)

# poolCnt files are split by the first byte of the digest with its lowest bit
# masked off (matching the pool subdirectory layout), so compute that prefix.
hashaa=$(printf %02x $((0x${hash[1,2]} & 0xfe)))

setopt nullglob

# Pass 1: find hosts whose cumulative reference counts mention the hash.
hosts=()
for i in pc/*/; do
    host=${i:t}
    find $TOPDIR/pc/$host/refCnt -maxdepth 1 -name poolCnt.'[01]'.$hashaa -type f -print0 2>/dev/null | while read -d$'\0' poolCnt; do
        BackupPC_poolCntPrint $poolCnt | grep $hash > /dev/null && hosts+=$host
    done
done

echo matching hosts: $hosts

# Pass 2: narrow down to the individual backup numbers of those hosts.
typeset -A host_nums
for host in $hosts; do
    nums=()
    for i in pc/$host/[0-9]*/; do
        num=${i:t}
        find $TOPDIR/pc/$host/$num/refCnt -maxdepth 1 -name poolCnt.'[01]'.$hashaa -type f -print0 2>/dev/null | while read -d$'\0' poolCnt; do
            count=$(BackupPC_poolCntPrint $poolCnt | grep $hash | wc -l)
            [[ $count -gt 0 ]] && {
                nums+=$num
                host_nums[$host,$num]=$count
            }
        done
    done
done

echo matching backups:
for host_num count in ${(kv)host_nums}; do
    echo "  $host_num ($count occurrences)" | sed -e 's/,/#/'
done | sort

# Pass 3: walk the matching backups and print the paths that reference the hash.
echo searching backups:
for host_num count in ${(kv)host_nums}; do
    host=${host_num%,*}
    num=${host_num##*,}
    echo "  searching $host#$num"
    find $TOPDIR/pc/$host/$num -maxdepth 1 -name 'f%2f*' -type d -print0 2>/dev/null | while read -d$'\0' sharePath; do
        # un-mangle the share name (leading 'f', %xx escapes)
        share=${sharePath:t}
        share=${share[2,-1]}
        share=${share:gs/%10/\\n}
        share=${share:gs/%13/\\r}
        share=${share:gs/%2f/\/}
        share=${share:gs/%25/%}
        echo "searching $host:$share#$num"
        BackupPC_ls -R $sharePath | grep $hash
    done
done
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Invalid config.pl via configuration web interface

2022-11-09 Thread Alexander Kobel
Hi Iosif,

spot on, thanks a lot - problem solved for me, and reported upstream for Arch 
Linux (https://bugs.archlinux.org/task/76499).


Cheers,
Alex


On 11/9/22 13:44, Iosif Fettich wrote:
> Hi  Alexander,
> 
> here's what probably has bitten you:
> 
> ---
> 
> Date: Fri, 15 Apr 2022 11:45:54 -0700
> From: Craig Barratt 
> Reply-To: backuppc/backuppc 
> 
> To: backuppc/backuppc 
> Cc: Iosif Fettich , Mention 
> Subject: [backuppc/backuppc] Config write fails with Data::Dumper versions >= 
> 2.182 (Issue #466)
> 
> 
> 
> @ifettich [github.com] discovered that config file writing fails with 
> Data::Dumper versions > 2.178 due to a typo in the Data::Dumper->new() call. 
> The second argument is missing a qw() wrapper. This was benign up to around 
> Data::Dumper version < 2.182, but some changes to the XS library since then 
> expose the long-time bug in BackupPC.
> 
> Because Data::Dumper is used in terse mode, there's no need to provide the 
> variable name in the second argument. So the fix is to simply remove the 2nd 
> argument. That fix is backward compatible with older versions of Data::Dumper.
> 
> ---
> 
> Hope this helps a little bit. Most probably, updating your BackupPC to the 
> corrected version is all you need to do (besides restoring the settings that 
> you had in use).
> 
> Best regards,
> 
> Iosif Fettich
> 
> 
> 
> 
> 
> 
> On Wed, 9 Nov 2022, Alexander Kobel wrote:
> 
>> Dear all,
>>
>> I receive validation errors of my config file after changes to the (global) 
>> config in the web interface. Consequently, BackupPC terminates.
>>
>> I'm absolutely sure that this worked before; my last (host) config change 
>> dates back to Feb 2022, the last global config change happened mid 2020. 
>> Unfortunately, I can't pinpoint a specific culprit (system) update anymore. 
>> Normal operation is not affected, so I didn't spot the issue earlier; just 
>> undoing the most recent perl-related updates from today's regular update 
>> does not help.
>>
>> The issue is that upon changing the main config or, e.g., adding a host, 
>> HASH or ARRAY entries in the config file are written with parentheses rather 
>> than braces or brackets, as expected. In turn, I receive
>>
>>> Software error:
>>>
>>> Not an ARRAY reference at /usr/share/backuppc/lib/BackupPC/CGI/Lib.pm line 
>>> 468.
>>
>> or similar messages on operations that re-read the config, accompanied by 
>> crashes of the server. Attached is a diff of the config folder, with entries 
>> like
>>
>> 2431c2431
>> < $Conf{ClientShareName2Path} = {};
>> ---
>>> $Conf{ClientShareName2Path} = ();
>> 2433c2433
>> < $Conf{RsyncIncrArgsExtra} = [];
>> ---
>>> $Conf{RsyncIncrArgsExtra} = ();
>>
>> I can fix the config manually and the server starts again; however, I'm not 
>> 100% confident whether some log/configuration data is written periodically, 
>> e.g. on nightlies, and more dragons hide behind the scenes.
>>
>> Did anyone experience a similar problem? Any known incompatibilities with one 
>> of the more recent perl packages? Any clues what might be the problem?
>>
>> For context, I'm on Arch, pretty much up-to-date; relevant versions of 
>> BackupPC, web server and dependencies are
>>
>> backuppc 4.4.0-5
>> lighttpd 1.4.67-1
>>
>> glibc 2.36-6
>> popt 1.18-3
>> perl 5.36.0-1
>> par2cmdline 0.8.1-2
>> perl-archive-zip 1.68-7
>> perl-io-dirent 0.05-15
>> perl-file-listing 6.15-2
>> perl-time-modules 2013.0912-8
>> perl-cgi 4.54-2
>> perl-xml-rss 1.62-1
>> perl-json-xs 4.03-3
>> postfix 3.7.3-2
>>
>>
>> Thanks and cheers,
>> Alex
> 
> 
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:    https://github.com/backuppc/backuppc/wiki
> Project: https://backuppc.github.io/backuppc/


smime.p7s
Description: S/MIME Cryptographic Signature
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


[BackupPC-users] Invalid config.pl via configuration web interface

2022-11-09 Thread Alexander Kobel
Dear all,

I receive validation errors of my config file after changes to the (global) 
config in the web interface. Consequently, BackupPC terminates.

I'm absolutely sure that this worked before; my last (host) config change dates 
back to Feb 2022, the last global config change happened mid 2020. 
Unfortunately, I can't pinpoint a specific culprit (system) update anymore. 
Normal operation is not affected, so I didn't spot the issue earlier; just 
undoing the most recent perl-related updates from today's regular update does 
not help.

The issue is that upon changing the main config or, e.g., adding a host, HASH 
or ARRAY entries in the config file are written with parentheses rather than 
braces or brackets, as expected. In turn, I receive

> Software error:
> 
> Not an ARRAY reference at /usr/share/backuppc/lib/BackupPC/CGI/Lib.pm line 
> 468.

or similar messages on operations that re-read the config, accompanied by 
crashes of the server. Attached is a diff of the config folder, with entries 
like

2431c2431
< $Conf{ClientShareName2Path} = {};
---
> $Conf{ClientShareName2Path} = ();
2433c2433
< $Conf{RsyncIncrArgsExtra} = [];
---
> $Conf{RsyncIncrArgsExtra} = ();

I can fix the config manually and the server starts again; however, I'm not 
100% confident whether some log/configuration data is written periodically, 
e.g. on nightlies, and more dragons hide behind the scenes.

Did anyone experience a similar problem? Any known incompatibilities with one of 
the more recent perl packages? Any clues what might be the problem?

For context, I'm on Arch, pretty much up-to-date; relevant versions of 
BackupPC, web server and dependencies are

backuppc 4.4.0-5
lighttpd 1.4.67-1

glibc 2.36-6
popt 1.18-3
perl 5.36.0-1
par2cmdline 0.8.1-2
perl-archive-zip 1.68-7
perl-io-dirent 0.05-15
perl-file-listing 6.15-2
perl-time-modules 2013.0912-8
perl-cgi 4.54-2
perl-xml-rss 1.62-1
perl-json-xs 4.03-3
postfix 3.7.3-2


Thanks and cheers,
Alex

diff -r backuppc/config.pl backuppc.broken/config.pl
116c116
< $Conf{WakeupSchedule} = [
---
> $Conf{WakeupSchedule} = (
213c213
< ];
---
> );
420c420
< $Conf{DHCPAddressRanges} = [];
---
> $Conf{DHCPAddressRanges} = ();
639c639
< $Conf{FullKeepCnt} = [
---
> $Conf{FullKeepCnt} = (
647c647
< ];
---
> );
756c756
< $Conf{BackupFilesOnly} = {};
---
> $Conf{BackupFilesOnly} = ();
812c812
< $Conf{BackupFilesExclude} = {};
---
> $Conf{BackupFilesExclude} = ();
885c885
< $Conf{BlackoutPeriods} = [
---
> $Conf{BlackoutPeriods} = (
899c899
< ];
---
> );
1003c1003
< $Conf{SmbShareName} = [
---
> $Conf{SmbShareName} = (
1005c1005
< ];
---
> );
1121c1121
< $Conf{TarShareName} = [
---
> $Conf{TarShareName} = (
1123c1123
< ];
---
> );
1267c1267
< $Conf{RsyncSshArgs} = [
---
> $Conf{RsyncSshArgs} = (
1270c1270
< ];
---
> );
1287c1287
< $Conf{RsyncShareName} = [
---
> $Conf{RsyncShareName} = (
1289c1289
< ];
---
> );
1326c1326
< $Conf{RsyncFullArgsExtra} = [
---
> $Conf{RsyncFullArgsExtra} = (
1328c1328
< ];
---
> );
1334c1334
< $Conf{RsyncArgs} = [
---
> $Conf{RsyncArgs} = (
1352c1352
< ];
---
> );
1386c1386
< $Conf{RsyncArgsExtra} = [
---
> $Conf{RsyncArgsExtra} = (
1394c1394
< ];
---
> );
1424c1424
< $Conf{RsyncRestoreArgs} = [
---
> $Conf{RsyncRestoreArgs} = (
1440c1440
< ];
---
> );
1477c1477
< $Conf{FtpShareName} = [
---
> $Conf{FtpShareName} = (
1479c1479
< ];
---
> );
2234c2234
< $Conf{CgiNavBarLinks} = [
---
> $Conf{CgiNavBarLinks} = (
2250c2250
< ];
---
> );
2255c2255
< $Conf{CgiStatusHilightColor} = {
---
> $Conf{CgiStatusHilightColor} = (
2263c2263
< };
---
> );
2290c2290
< $Conf{CgiExt2ContentType} = {};
---
> $Conf{CgiExt2ContentType} = ();
2328c2328
< $Conf{CgiUserConfigEdit} = {
---
> $Conf{CgiUserConfigEdit} = (
2425c2425
< };
---
> );
2431c2431
< $Conf{ClientShareName2Path} = {};
---
> $Conf{ClientShareName2Path} = ();
2433c2433
< $Conf{RsyncIncrArgsExtra} = [];
---
> $Conf{RsyncIncrArgsExtra} = ();


smime.p7s
Description: S/MIME Cryptographic Signature
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Conf{PoolNightlyDigestCheckPercent} on btrfs and zfs

2021-05-08 Thread Alexander Kobel
Hi Guillermo,

I agree. BackupPC's RefCnt/Fsck is a must IMHO.

A main difference between PoolNightlyDigestCheck and btrfs' built-in 
checksumming is that the nightly digest checks proactively scan the data and 
will tell you about broken files soon after they break (well, as soon as the 
next check runs after the breakage). Btrfs will only complain on retrieval; 
perhaps soon, perhaps never. So you need a way to make sure that your archived 
data is actually read every now and then and compared against the checksums 
for consistency.

For btrfs, this is done by the "scrub" operation, which is recommended to be 
run regularly.
In my experience, it's slightly lighter on CPU load than BackupPC's check, but 
the actual load should be on I/O to the disk, not the CPU. The default CRC32 is 
somewhat weaker than the MD5 used by BackupPC, but that is a concern only for 
pool collisions, not for data rot. Newer btrfs versions also offer, e.g., 
xxhash, blake2 and sha256 checksums, which are faster and/or more collision 
resistant.
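For example, a simple monthly cron job would do (untested; the path is a 
placeholder, and -B keeps scrub in the foreground so cron can mail you the 
summary):

  # /etc/cron.d/btrfs-scrub
  0 3 1 * *  root  /usr/bin/btrfs scrub start -B /srv/backuppc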

I personally use btrfs scrub exclusively (which also checks my non-BackupPC 
files on that disk) and disabled NightlyDigestCheck. However, there is one 
significant drawback that might or might not be an issue for you: btrfs scrub 
is all-or-nothing; it always scans the entire partition and, to be frank, the 
"idle priority scheduler" does not play nicely on every system. In contrast, 
BackupPC's nightly checks are more clever in that you can scan part of your 
pool every night, to distribute the load on your system.
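For instance, something like this in config.pl spreads one full pass over the 
pool across roughly a month (the number is only an illustration):

  $Conf{PoolNightlyDigestCheckPercent} = 3;    # ~3% of the pool files per night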

So, IMHO, the best approach is to have a separate partition/device exclusively 
for your pool, use NightlyDigestCheck there, and *disable* (or not enable) 
regular btrfs scrub. (You can keep checksumming active nevertheless, it's 
extremely low overhead.)
A scrub does oh-so-slightly more, as it also checks metadata checksums and can 
repair corrupted blocks if good copies are available - but problems will also 
be detected by a BackupPC nightly scan, and corrective action can be taken.


For any of those, make sure you receive notifications by mail, monitoring, or 
LEDs above your pillow in case there are errors. Spending a lot of time on 
checksumming, reading SMART data, health checks etc. isn't going to help unless 
you know about the results.


For ZFS, no clue.


Best,
Alex


On 5/4/21 6:38 PM, Guillermo Rozas wrote:
> One ensures against file system bit rot, the other ensures backup file
> consistency.
> 
> 
> I would say $Conf{PoolNightlyDigestCheckPercent} = 1 is also a check for bit 
> rot, as the only thing it does is to read the file, re-calculate the md5 
> checksum, and compares it with its name (which is the md5 calculated at the 
> time of writing). It actually says it in the help, "This is check if there 
> has been any server file system corruption."
> 
> What controls the consistency of the backup are 
> $Conf{PoolSizeNightlyUpdatePeriod} and specially $Conf{RefCntFsck}.
> 
> Regards,
> Guillermo
> 
> 
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:https://github.com/backuppc/backuppc/wiki
> Project: https://backuppc.github.io/backuppc/
> 



smime.p7s
Description: S/MIME Cryptographic Signature
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] double hop rsync

2021-03-16 Thread Alexander Kobel
Hi Greg,

On 3/16/21 4:27 PM, gregrwm wrote:
> On Tue, Mar 16, 2021 at 8:45 AM  > wrote:
> 
> gregrwm wrote at about 19:59:53 -0500 on Monday, March 15, 2021:
>  > i'm trying to use a double hop rsync to backup a server that can only 
> be
>  > reached indirectly.  a simple test of a double hop rsync to the target
>  > server seems to work:
>  >
>  >   #  sudo -ubackuppc rsync -PHSAXaxe"ssh -xq 192.168.128.11 ssh -xq"
>  > --rsync-path=sudo\ /usr/bin/rsync 
> 192.168.1.243:/var/log/BackupPC/.bashrc
>  > /tmp
>  > receiving incremental file list
>  > .bashrc
>  >             231 100%  225.59kB/s    0:00:00 (xfr#1, to-chk=0/1)
>  >   0#
>  >
>  > which demonstrates that the backuppc keys, sudo settings, and double 
> hop
>  > rsync all work.
>  >
>  > here's my double hop settings:
>  > $Conf{RsyncClientCmd} = 'ssh -xq 192.168.128.11 ssh -xq 192.168.1.243 
> sudo
>  > /usr/bin/rsync $argList+';
>  > $Conf{ClientNameAlias} = '192.168.128.11';
> 
> Why don't you try using the 'jump' host option on ssh.
> -J 192.168.128.11
> 
> 
> seems like a really good idea.  so i tried:
> 
> $Conf{RsyncClientCmd} = 'ssh -xqJ192.168.128.11 sudo /usr/bin/rsync 
> $argList+';
> $Conf{ClientNameAlias} = '192.168.1.243';
> 
> and got:
> Got remote protocol 1851877475
> Fatal error (bad version): channel 0: open failed: connect failed: Name or 
> service not known
> stdio forwarding failed
> Can't write 1298 bytes to socket
> fileListReceive() failed
> 
> if you've any ideas how to tweak that and try again i'm eager,

any luck with the ProxyJump config option? I use this in my BackupPC user's 
~/.ssh/config to keep the BackupPC config as clean as possible.
See, e.g., https://wiki.gentoo.org/wiki/SSH_jump_host#Multiple_jumps

Probably, in your case it would be something like

Host client
    HostName    192.168.1.243
    ProxyJump   192.168.128.11


HTH,
Alex

> thank you,
> greg
> 
> 
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:https://github.com/backuppc/backuppc/wiki
> Project: https://backuppc.github.io/backuppc/
> 



smime.p7s
Description: S/MIME Cryptographic Signature
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Full backups taking a very long time

2021-03-11 Thread Alexander Kobel
Hi Dave,

does each client individually take so long, or is it just one? (Or perhaps one 
client is taking almost all resources, so the other one is slow.)

If both clients suddenly have problems, an issue with the BackupPC server is 
likely.

Otherwise, I've seen similar issues when suddenly some service on the client 
decided to place a virtual memory map file somewhere in /var/run, which 
pretends to be of **huge** size, like 2 TB or something - and I didn't use the 
-x / --one-file-system flag for rsync.
The file mostly consists of (implicitly represented) zeros and compresses 
really nicely, so BackupPC doesn't choke on it, technically; but the processing 
takes forever, and eventually it always hits some timeout.

So I recommend a `sudo find /share -size +1G` to find potential huge files that 
you don't expect to be in the backups.

Or try `sudo ls -l /proc/$(pgrep rsync)/fd` on the clients during backup and 
try to see whether rsync still progresses, and if not, on which file it idles. 
Same for tar, obviously.


Cheers,
Alex


On 3/10/21 3:04 PM, David Williams wrote:
> I have recently upgraded to Ubuntu 20.04 and since then I have noticed that 
> my full backups are taking much longer than they used to do.  I’m only using 
> backuppc to bakup two machines at home.  The Ubuntu machine and a Mac laptop. 
>  I don’t recall exactly how long the full backups were taking previously, but 
> now they are taking close to 21 hours.  The content on both machines hasn’t 
> changed much at all since the upgrade so I was surprised by the increase in 
> time.
> 
> A full backup on the Linux machine is around 892MB.  This is the local 
> machine that Backuppc is installed on.  The drive that the backups are stored 
> on is an SSD as are most, if not all (sorry can’t remember) of the drives in 
> the Linux box.  Backup method is tar.
> 
> A full backup on the Mac laptop is around 700MB.  It’s connected to the same 
> router as the Linux machine via ethernet.  Backup method is rsync.
> 
> I’m not sure how to troubleshoot this increase in timing so any help would be 
> much appreciated.
> 
> Regards,
> _
> *Dave Williams*
> 
> 
> 
> 
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:https://github.com/backuppc/backuppc/wiki
> Project: https://backuppc.github.io/backuppc/
> 



smime.p7s
Description: S/MIME Cryptographic Signature
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Which filesystem for external backup drive?

2021-02-04 Thread Alexander Kobel

Hi,

On 2/4/21 5:02 AM, Kenneth Porter wrote:

On 2/3/2021 6:54 PM, backu...@kosowsky.org wrote:

I just built backuppc for my Raspberry PI and ordered an external SSD
drive that I plan to format in btrfs.


I'm using CentOS, and it looks like Red Hat is dropping btrfs in favor 
of other filesystems:


(also in the light of last week's thread about BTRFS+compression) a very 
valid point.


BTRFS is in the kernel, so you're unlikely to be left without a system that can 
read your files anytime soon. But obviously, there's a mixed bag of opinions 
about BTRFS - RedHat ends support in 2019, and Fedora makes it the default in 
2020? Seriously? I'm at a loss there.


Distro support is a serious thing to consider. In general, BackupPC will 
happily work with whatever the default file system of your distribution is. 
For CentOS and RedHat, XFS is the obvious choice, and BTRFS will not give you 
any benefit except for compression, but potentially a wealth of trouble. You 
shouldn't need a whole lot of fancy features like snapshotting, copy-on-write, 
deduplication etc. on your pool anyway.



Cheers,
Alex



smime.p7s
Description: S/MIME Cryptographic Signature
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Cpool vs. filesystem level compression

2021-01-29 Thread Alexander Kobel

Hi again.

On 1/29/21 2:09 AM, backu...@kosowsky.org wrote:

Thanks Alexander -- REALLY helpful, REALLY thoughtful.
Comments below
Alexander Kobel wrote at about 18:04:54 +0100 on Thursday, January 28, 2021:

  > For initial backups and changes, it depends on your BackupPC server CPU.
  > The zlib compression in BackupPC is *way* more resource hungry than lzop
  > or zstd. You probably want to make sure that the network bandwidth is
  > the bottleneck rather than compressor throughput:
  >
  >gzip -c $somebigfile | pv > /dev/null
  >zstd -c $somebigfile | pv > /dev/null
  >lzop -c $somebigfile | pv > /dev/null
  >
  >
I get the following where I stored the file on a ram disk to minimize
the file read time effect...

1] Highly compressible 6GB text file
Compress:
gzip:  207MiB 0:00:47 [4.39MiB/s]
lzop:  355MiB 0:00:05 [70.2MiB/s]
zstd:  177MiB 0:00:07 [22.2MiB/s]

Uncompress:
gzip:  5.90GiB 0:00:21 [ 287MiB/s]
lzop:  5.90GiB 0:00:06 [ 946MiB/s]
zstd:  5.90GiB 0:00:04 [1.40GiB/s]

2] 1GB highly non-compressible file (created by /dev/urandom)
Compress:
gzip:  987MiB 0:00:31 [31.6MiB/s]
lzop:  987MiB 0:00:00 [1.24GiB/s]
zstd:  986MiB 0:00:01 [ 857MiB/s]

Note: I used the default compression levels for each.

So, focusing on the compressible file:
- gzip/zlib is slower than lzop and zstd and less compressible than
   zstd (but more than lzop)
- lzop is fastest but least compressible
- zstd is most compressible but slower than lzop, especially on
   compression
   
My concern with zstd though is that on compression, it is more than 3x

slower than lzo -- and is slower than even a standard hard disk,
meaning that it may be system performance limiting on writes.

Are your numbers similar?


Yes, they are, roughly. (On my laptop, didn't check on my server.)



However, I should have been a bit more careful here. What's actually 
important (to me) is whether my server can handle the *input* stream via 
network in time, in typical cases, so that the server side doesn't delay 
backup speed.


So the more interesting number is whether the server can consume, say, 1 
Gbit/s (or whatever your link speed is) in real-time. In your case, lzop 
and zstd deal with 1.2 GiB/s and 750 MiB/s *input* from the compressible 
file, whereas gzip does only 125 MiB/s. That's enough for Gigabit 
ethernet, theoretically, but just.



However, gzip is a severe bottleneck for the non-compressible data in 
your case.


And, of course, things get only worse if your CPU is also serving other 
tasks besides compression.




Either way, it seems that btrfs block level compression using either
lzo or zstd would be about an order of magnitude *faster* than
BackupPC compression using zlib. Right??


Correct. zlib is excessively slow compared to both, and the compression 
ratio does not make up for that. I quit gzip and bzip2 entirely and only 
use zstd, lz4/lzop or xz whenever I can. Of course, xz is abysmal in 
terms of speed, but it's still in the lead if every byte counts.




  > Unchanged files are essentially for free with both cpool and
  > pool+btrfs-comp for incrementals, but require decompression for full
  > backups except for rsync (as the hashes are always built over the
  > uncompressed content).

I assume you are referring to '--checksum'


Yes. IIUC, non-rsync-based transfers require 1:1-checking for full 
backups. (I have no such clients, thus I'm not sure.)



  > Same for nightlies, where integrity checks over
  > your pool data is done.

I don't believe that BackupPC_nightly does any integrity check of the
content, but rather just checks the integrity of the refcounts. As
such, I don't believe that it actually reads any files.


I was referring to $Conf{PoolNightlyDigestCheckPercent}, available since 
v4.4.0...



I did write my own perl script to check pool integrity in case anybody
is interested (it is about 100x faster than using a bash script to
iterate through the c-pool and pipe files to BackupPC_zcat followed by
md5sum)


... which has been superseded by the above and would again be 
superfluous with overall btrfs-scrub...



  > > 2. Storage efficiency, including:
  > >   - Raw compression efficiency of each file
  >
  > Cpool does file-level compression, btrfs does block-level compression.
  > The difference is measurable, but not huge (~ 1 to 2% compression ratio
  > in my experience for the same algorithm, i.e. zstd on block vs. file
  > level).

I assume compression is better on the whole file level, right?


Yes. As a rule, the better a file compresses, the more overhead the block-level 
compression scheme incurs. To compare the effects, compare the output of 
`compsize noncompressed-file` with the compression ratio of the full file, or 
better `du compressed-file`.
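Something along these lines makes the comparison for a given file on btrfs 
(untested; the path is a placeholder, and compsize is the usual btrfs tool for 
on-disk ratios):

  compsize /srv/data/somefile              # block-level (on-disk) compression ratio
  zstd -3 -c /srv/data/somefile | wc -c    # file-level compressed size, in bytes
  stat -c %s /srv/data/somefile            # uncompressed size, for reference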


For a chunk of kernel sources in a tarball, zstd -3 compresses to 16% 
file-level, but only to 21% on block-level.
 But I'd consider that pretty artificia

Re: [BackupPC-users] Cpool vs. filesystem level compression

2021-01-28 Thread Alexander Kobel

Hi,

On 1/27/21 10:58 PM, backu...@kosowsky.org wrote:

I know this question has been asked in the more distant past, but I
would like to get the latest views, as relevant to backuppc 4.x

I have my TopDir on a btrfs filesystem which has file-level
compression capabilities (using the mount option -o compress=lzo for
example).


I use the same, both as a daily driver on my machines and for my 
BackupPC pool. And I've been an early adopter of zstd instead of lzop, 
which I cannot praise highly enough.



I can do either:
1. Cpool with no btrfs compression
2. Pool with btrfs compression
3. Cpool plus btrfs compression (presumably no advantage)


Correct IMHO. The compressed cpool data will not compress any further. 
So I'll only comment on scenarios 1 and 2.


Throughout, I'll assume rsync transfers. Educated guess: the arguments 
hold for tar and rsyncd. For smb, no idea; decompression speed could be 
even more relevant.



I would like to understand the pros/cons of #1 vs. #2, considering
among other things:
1. Backup speed, including:
  - Initial backup of new files
  - Subsequent incremental/full backups of the same (unchanged) file
  - Subsequent incremental/full backups of the same changed file


For initial backups and changes, it depends on your BackupPC server CPU. 
The zlib compression in BackupPC is *way* more resource hungry than lzop 
or zstd. You probably want to make sure that the network bandwidth is 
the bottleneck rather than compressor throughput:


  gzip -c $somebigfile | pv > /dev/null
  zstd -c $somebigfile | pv > /dev/null
  lzop -c $somebigfile | pv > /dev/null


+/- multithreading, check for yourself.
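For reference, the multi-threaded variants would be something like this (pigz 
being the parallel gzip implementation; untested here):

  zstd -T0 -c $somebigfile | pv > /dev/null
  pigz -c $somebigfile | pv > /dev/null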

Unchanged files are essentially for free with both cpool and pool+btrfs-comp 
for incrementals, but require decompression for full backups except for rsync 
(as the hashes are always built over the uncompressed content). Same for 
nightlies, where integrity checks over your pool data are done. Decompression 
is significantly faster than compression, of course, but its speed still 
differs vastly between the three algorithms. For fast full backups, you might 
want to ensure that you can decompress even several times faster than network 
throughput.



2. Storage efficiency, including:
  - Raw compression efficiency of each file


Cpool does file-level compression, btrfs does block-level compression. 
The difference is measurable, but not huge (~ 1 to 2% compression ratio 
in my experience for the same algorithm, i.e. zstd on block vs. file 
level). Btrfs also includes logic to not even attempt further 
compression if a block looks like it's not going to compress well. In my 
experience, that's hardly ever an issue.


So, yes, using zlib at the same compression level, btrfs compresses 
slightly worse than BackupPC. But for btrfs there's also lzop and zstd.



  - Ability to take advantage of btrfs extent deduplication for 2
distinct files that share some or all of the same (uncompressed) content


Won't work with cpool compression.
For pool+btrfs-comp, it's hard to assess - depends on how your data 
changes. Effectively, this only helps with large files that are mostly 
identical, such as VM images. Block-level dedup is difficult, only 
available as offline dedup in btrfs, and you risk that all your backups 
are destroyed if the one copy of the common block in there gets 
corrupted. For me a no-go, but YMMV, in particular with a RAID-1.


File level deduplication is irrelevant, because BackupPC takes care of 
that by pooling.



3. Robustness in case of disk crashes, file corruption, file system
corruption, other types of "bit rot" etc.
(note my btrfs filesystem is in a btrfs-native Raid-1
configuration)


DISCLAIMER: These are instances for personal data of a few people. I care 
about the data, but there are no lives or jobs at stake.



Solid in my experience. Make sure to perform regular scrubs and check 
that you get informed about problems.
On my backup system, I only ever saw problems once, when the HDD was 
about to die. No RAID to help, so this was fatal for a dozen files, 
which I had to recover from a second off-site BackupPC server.


On my laptops, I saw scrub errors five or six times after power losses during 
heavy duty. That's less than one occasion per year, but still, it happened.


On a side note, theoretically you won't need nightly pool checks if you 
run btrfs scrub at the same rate.


With kernel 5.10 being an LTS release, we even have a stable kernel + 
fallback supporting xxhash/blake2/sha256 checksums, which is great at 
least from a theoretical perspective.




In case there *is* a defect, however, there's not a whole lot of 
recovery options on btrfs systems. I wasn't able to recover from any of 
the above scrub errors, I had to delete the affected files.




In the past, it seems like the tradeoffs were not always clear so
hoping the above outline will help flesh out the details...



Looking for both real-world experience as well as theoretical
observations :)


Re: [BackupPC-users] Imnproving backup speed

2021-01-09 Thread Alexander Kobel

Hi all,

On 1/7/21 8:56 PM, Michael Stowe wrote:

On 2021-01-07 00:39, Sorin Srbu wrote:

Hello all!

Trying to improve the backup speed with BPC and looked into setting 
noatime in fstab. [...]


What will BPC in particular do if noatime is set?


In short, it depends on your transport methods.  Rsync will be fine. 
Tar/smb?  Not so much for incrementals, but fulls (of course) will be fine.


On the *client* side that's a more interesting question. As mentioned 
before in this thread and as documented in the BackupPC docs, the 
*server* side is okay with noatime as far as the BackupPC pool is concerned.


For clients, I have no idea about smb, but a vague one about tar. 
Disclaimer: I don't use tar myself.


The GNU tar manpage is fairly explicit about atime issues, but the 
subject is distributed in different sections:
In 
https://www.gnu.org/software/tar/manual/html_section/tar_22.html#SEC42, 
it recommends --atime-preserve=system for incremental backups, if 
supported by the OS. This gently asks the system not to modify atimes; 
essentially a noatime request for an individual read.
In https://www.gnu.org/software/tar/manual/html_section/tar_69.html 
however, the --atime-preserve=replace (which happens to be the default 
variant for --atime-preserve) is documented to *not* play nicely with 
incremental backups. This one resets the file stats after reading, but 
the reset itself counts as metadata change, and accordingly the file 
will be re-read on the next run. I'm not 100% sure where the timestamp 
of the change is recorded, though. Also, in my tests, I could not 
confirm that --atime-preserve=replace causes issues with incrementals; 
but that's based on rather ad-hoc manual tests, not via BackupPC.


In any case, tar recommends an entirely different approach for 
incremental dumps 
(https://www.gnu.org/software/tar/manual/html_section/tar_39.html#SEC96), but 
this is not feasible with BackupPC.


Also, note that --newer does not consider atime, but only ctime and 
mtime 
(https://www.gnu.org/software/tar/manual/html_section/tar_52.html#SEC116).


BackupPC by default uses --newer (aka --after-date) on incremental dumps 
(see $Conf{TarIncrArgs} around 
https://github.com/backuppc/backuppc/blob/master/conf/config.pl#L1136), 
and accordingly has a warning in the config about not using 
--atime-preserve[=replace]. However, it is questionable whether the same 
warning also applies for --atime-preserve=system (assuming it's 
supported on the client).


So from what I understand, the combination of --newer and 
--atime-preserve=system (or noatime mounts) should be almost optimal.
And --newer-mtime + noatime should work, too, but with the usual 
downside of --newer-mtime that metadata updates are not caught (e.g., 
permission changes).
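A quick client-side check whether a given mount still updates atimes on read 
(untested sketch; the path is a placeholder, and note that on relatime mounts 
the atime is only updated in certain cases, e.g. when it is older than a day, 
so interpret the result accordingly):

  f=/data/somefile                 # any file on the filesystem in question
  stat -c %x "$f"; cat "$f" > /dev/null; stat -c %x "$f"
  # identical output = reads don't touch the atime (noatime-like behaviour)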


Of course, that's assuming that there are no other issues with using noatime 
unrelated to the backups.



P.S.: https://backuppc.github.io/backuppc/BackupPC.html#Backup-basics is 
not 100% accurate, claiming that incremental backups with tar rely on 
mtime only; actually, the use of --newer implies that mtime *and* ctime 
are relevant.



Cheers,
Alex



smime.p7s
Description: S/MIME Cryptographic Signature
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Imnproving backup speed

2021-01-07 Thread Alexander Kobel

Hi Sorin,

On 1/7/21 9:39 AM, Sorin Srbu wrote:

Hello all!

Trying to improve the backup speed with BPC and looked into setting noatime
in fstab.

But this article states some backup programs may bork if noatime is set.

https://lonesysadmin.net/2013/12/08/gain-30-linux-disk-performance-noatime-nodiratime-relatime/

What will BPC in particular do if noatime is set?


exactly what it's supposed to do. noatime or at least relatime (or 
perhaps recently lazytime) is the recommended setting:

https://backuppc.github.io/backuppc/BackupPC.html#Optimizations


Cheers,
Alex



smime.p7s
Description: S/MIME Cryptographic Signature
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Large rsyncTmp files

2020-05-01 Thread Alexander Kobel

Hi Marcelo,

On 5/1/20 4:15 PM, Marcelo Ricardo Leitner wrote:

Hi,

Is it expected for rsync-bpc to be writting such large temporary files?


If and only if there is such a big file to be backed up, AFAIK.


It seems they are as big as the full backup itself:
# ls -la */*/rsync*
-rw--- 1 112 122 302598406144 May  1 10:54 HOST/180/rsyncTmp.4971.0.29


Did you double-check whether there really is no file of that size on the 
HOST? (Try running `find $share -size +10M` on it, or something like 
that.)


Do you use the -x (or --one-file-system) option for rsync?
I recently ran into a similar issue because I didn't. A chrooted process 
suddenly received its own copy of /proc under 
/var/lib//proc after a system update, and proc has the 
128T-huge kcore. Not a good idea trying to back up that directory. 
(Running dhcpcd on Arch by any chance?)
It also got other mounts, like sysfs and some tmpfs, but those were 
mostly harmless.
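If that's the culprit, keeping rsync on one filesystem can also be configured 
on the BackupPC side, roughly like this (an untested sketch; 
$Conf{RsyncArgsExtra} is the v4 parameter for extra rsync arguments):

  $Conf{RsyncArgsExtra} = [
      '--one-file-system',    # don't descend into /proc, /sys, tmpfs and other mounts
  ];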



That's a 300GB file, it filled the partition, and the full size for
this host is 337GB.

Thanks,
Marcelo



HTH,
Alex


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Why ping before backup?

2019-08-10 Thread Alexander Kobel
On 09.08.19 07:08, Michael Huntley wrote:
> It’s simply to check if the host is answering.

Yup.

> I use ‘echo’

IIUC, you use a command that always returns true (coincidentally, the
canonical choice for that command is `true`... ;-))?

This means that
(1) blackouts won't work as expected (note that pings are done even when
no backup is due, to log whether this is an "always-on host"), and
(2) that no matter whether the server is reachable or not, for any
wakeup moment when a new backup is due, the host's backup tree will be
copied over to the new "working tree" and immediately discarded again if
the host couldn't be reached.

(2) is merely an issue with the server load; (1) actually changes the
"semantics" and could mean that backups are skipped because the host is
considered always-on, but isn't.

IMHO the only reason why one might want to replace the ping command is
hosts that don't reply to pings for security reasons, or hosts where the
hostname addressing is non-trivial (e.g., SSH via proxies specified in
~backuppc/.ssh/config). For both situations, and assuming SSH is used,
sshping is an excellent replacement for ping; see
https://github.com/spook/sshping and my very recent pull request
https://github.com/spook/sshping/pull/16, which was made with BackupPC
in mind... A pity that I found it only a couple of days ago myself...


Cheers,
Alex


>> On Aug 8, 2019, at 10:05 PM, Kenneth Porter  wrote:
>>
>> Why does BackupPC ping the host to be backed up before starting its backup. 
>> (I'm using rsyncd.) I'm thinking of replacing the ping command with "rsync 
>> $host::". Is there any downside to that?
>>
>>
>>
>> ___
>> BackupPC-users mailing list
>> BackupPC-users@lists.sourceforge.net
>> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
>> Wiki:http://backuppc.wiki.sourceforge.net
>> Project: http://backuppc.sourceforge.net/
>>
> 
> 
> 
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
> 



smime.p7s
Description: S/MIME Cryptographic Signature
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] RsyncIncrArgsExtra

2019-08-03 Thread Alexander Kobel
Hi,

On 03.08.19 18:59, Ted Toal wrote:
> Ged,
> 
>> BackupPC shines, I think, in less well-constrained situations.
>>
>> Given the boundaries I wonder if you wouldn't do better with something
>> simple like a script which runs 'find' to find the files to be backed
>> up, plain vanilla rsync to do the actual transfers, and de-duplication
>> provided (if necessary) by one of several filesystems which offer it.
> 
> We looked at a lot of different solutions, and BackupPC seemed best.  I 
> really like it.  I’m not sure that any script we set up could do any better 
> job finding the files to back up, than rsync via BackupPC with the 
> file-size-constraint option specified.  If I understand it correctly, 
> incrementals DO NOT read the entire file contents and compute a checksum, but 
> work strictly off of file modification date, so finding the files requires 
> only reading the directories and not reading the files themselves, right?

Correct.

FWIW, `find` with inspection of the modification date (-newer) calls
getdents64 via readdir for listing directory entries directly, then
lstat for each entry. `rsync` does exactly the same; so for the
unchanged files, both should be identical. (In other words, I don't
think that an additional mirroring script based on find buys you
anything over BackupPC's rsync use.)

What *might* be a problem: I remember the painful experience of listing
directories with more than a couple files via NFS. [1] explains a
possible reason: readdir is not exactly the most efficient way to get
such lists, in particular if the latency to get another chunk of the
directory listings is significant. But probably that won't matter if you
have to call lstat per file anyways.

  [1]:
http://be-n.com/spw/you-can-list-a-million-files-in-a-directory-but-not-with-ls.html


Alex


> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
> 



smime.p7s
Description: S/MIME Cryptographic Signature
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] RsyncIncrArgsExtra

2019-08-03 Thread Alexander Kobel
Hi Ted,

On 02.08.19 20:09, Ted Toal wrote:
> Hi Alex,
> 
> Ok, thanks for that suggestion, I’d thought of it, but wasn’t sure if rsync 
> would complain if the arg appeared twice, but apparently it doesn’t.
> 
> I am NOT sure whether bandwidth limitation is what I want.  I am actually 
> trying to throttle down not only the network bandwidth used but also the I/O 
> load.  This is a shared file system with hundreds of users accessing it.  I’m 
> only backing up our lab’s small portion of the data, and I’m only backing up 
> files less than 1 MB in size.  The full backups are done separately by 
> someone else in a different manner.  For my <1 MB files, I am doing a full 
> backup once a year and an incremental backup once an hour.

> I want to have essentially 0 impact on the network bandwidth and on the I/O 
> load between the server that talks to BackupPC and the network storage device.

I'm not 100% sure, but this sounds way more complicated than throttling
the bandwidth between the BackupPC server and the host.

IIUC, your situation is:

  BPC (1)  ---(a)---  host (2)  ---(b)---  NAS (3)

BPC (1) is the BackupPC server; host (2) is the system you want to back
up, i.e., the client from BackupPC's perspective; and NAS (3) is the
server providing the shared file system.

You want to limit I/O on 3 as well as bandwidth on link b, with
privileged access to only 1, no access to 3, and probably no chance of
changing the way 2 communicates with 3, correct? (E.g., to set up a
dedicated NFS connection where the server side (3) is I/O-limited.)


Here's my gut feeling: (disclaimer: unconfirmed, highly dependent on
your exact setup, and I'm not an expert on NFS setups)

In that situation, ionice on 2 won't help; the rsync instance running on
host 2 is purely cpu- and network-bound, but has negligible local I/O
(controlled by ionice). And limiting cpu (via nice) and network
bandwidth (via trickle, e.g.) on 2 won't help, either: just listing
files on an NFS is usually a bottleneck, because individual requests
have to pass the link b.
If you somehow manage to limit the bandwidth across b, actual *content*
transfer will be horribly slow. (And I expect this to be difficult, as
the NFS is probably pre-mounted via a mechanism that you can't control.)
The only reasonable idea, AFAICS, would be to rate-limit the *number* of
files accessed. But I do not see how this could be done, short of
modifying the rsync-sender on host 2.

IMHO, the one and only *proper* way to install such a backup solution
would be to ask your friendly staff managing the NAS 3 (hopefully
experts on how their setup works, if it serves 100+ users) to grant you
access to their backups (which they surely have), or give you read-only
direct access to NAS 3 with proper limits.
What you're trying to do sounds like their job, and even if you have
reasons to think that you might do better or have specific requirements
they won't be able to fulfill, you're not in the best position to implement it.


Just my 2 pennies from someone who enjoys not having to deal with NFS a
lot...

Alex


> Since I’m just starting, I’m doing the first full backups, and they are 
> taking forever.  I have a bandwidth limit of 1 MB/s, very low.  I need to 
> explore how high I can go without impacting other’s access, and how high I 
> need to go to finish the full backups and incremental backups in a timely 
> fashion.  I’m thinking a higher bandwidth limit for the full backups would 
> get them done quicker with still little impact.  For the incrementals, I 
> haven’t done one yet so I don’t know how long it will take, but I may 
> discover I have to increase that bandwidth also, and/or decrease the 
> frequency of the incrementals.
> 
> Based on that, do you think I should be using ionice too?  And by the way, I 
> do not have root access to the server.
> 
> Ted



smime.p7s
Description: S/MIME Cryptographic Signature
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] RsyncIncrArgsExtra

2019-08-02 Thread Alexander Kobel
Hi again,

On 02.08.19 11:50, Alexander Kobel wrote:
> Hi Ted,
> 
> On 01.08.19 18:31, Ted Toal wrote:
>> There is a BackupPC config parameter named RsyncFullArgsExtra, but none 
>> named RsyncIncrArgsExtra (to provide extra rsync args for an incremental 
>> backup).  I’d like to see such a parameter.  My immediate use is that I’d 
>> like to restrict rsync bandwidth to different amounts depending on whether 
>> it is a full or incremental backup.
> 
> [...]
> 
> Apart from that, are you sure that a bandwidth limit actually is what
> you're after? The (network) *bandwidth* used for incrementals and fulls
> does not differ a lot; it's the *I/O load* on the client that makes the
> real difference:

that being said: if you want to use ionice or similar tools to adjust
the I/O load on the client, I suggest that you set RsyncClientPath to a
simple wrapper script that calls rsync via ionice. In this script, just
check whether --checksum is in the argument list; if it is, you're
running a full backup, otherwise an incremental.
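An (untested) sketch of such a wrapper - the rsync path, ionice classes and 
nice levels are placeholders to be tuned; also note that, depending on the 
rsync-bpc version, --checksum may arrive folded into the bundled short options 
rather than literally, so log the arguments once and adapt the pattern:

  #!/bin/sh
  # wrapper on the client; point $Conf{RsyncClientPath} at this file
  # logger -t rsync-wrapper "args: $*"   # uncomment once to inspect the arguments
  case " $* " in
      *" --checksum "*)   # looks like a full backup: every file gets read
          exec ionice -c 2 -n 7 nice -n 19 /usr/bin/rsync "$@"
          ;;
      *)                  # incremental: mostly metadata traversal
          exec ionice -c 2 -n 4 /usr/bin/rsync "$@"
          ;;
  esac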


Again: before solving the problem, make sure that it actually exists.
;-) I wouldn't be surprised if you end up at the *same* ionice arguments
for fulls and incrementals...


Cheers,
Alex



smime.p7s
Description: S/MIME Cryptographic Signature
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] RsyncIncrArgsExtra

2019-08-02 Thread Alexander Kobel
Hi Ted,

On 01.08.19 18:31, Ted Toal wrote:
> There is a BackupPC config parameter named RsyncFullArgsExtra, but none named 
> RsyncIncrArgsExtra (to provide extra rsync args for an incremental backup).  
> I’d like to see such a parameter.  My immediate use is that I’d like to 
> restrict rsync bandwidth to different amounts depending on whether it is a 
> full or incremental backup.

assuming that you want to use --bwlimit, can't you just add

  --bwlimit=

in RsyncArgs, and an additional

  --bwlimit=

in RsyncFullArgsExtra? According to my tests, the second will override
the first for the full limits. Slightly inelegant workaround, but effective.
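In config.pl that would look roughly like this (untested; the numbers are 
placeholders, interpreted by rsync as KiB/s):

  $Conf{RsyncArgs} = [
      # ... keep the stock arguments ...
      '--bwlimit=1024',       # limit in effect for incrementals
  ];
  $Conf{RsyncFullArgsExtra} = [
      '--checksum',
      '--bwlimit=4096',       # appended later, so it wins for fulls
  ];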

Note that the arguments are appended in the order

  RsyncArgs RsyncFullArgsExtra RsyncArgsExtra

(see lib/BackupPC/Xfer/Rsync.pm, lines 307-334 or so), so you have to
add the "default" (incremental) limit to RsyncArgs, not RsyncArgsExtra.


Apart from that, are you sure that a bandwidth limit actually is what
you're after? The (network) *bandwidth* used for incrementals and fulls
does not differ a lot; it's the *I/O load* on the client that makes the
real difference:

IIUC, no matter what backup type, rsync needs to compare all file paths
and some metadata. For incrementals, by default it compares
  path size modification-time;
for fulls (with --checksum), it skips the latter two and compares
  path checksum
instead.

For the computation of the checksums, the client will read each file in
its entirety. That makes for a lot of *I/O bandwidth* on the client. But
regarding the *network bandwidth*: The checksum is an MD5 hash, i.e.,
128 bit = 16 bytes long. Without digging in the source, size and modtime
are probably integers of 4 or 8 bytes each. So my guess is that the
bandwidth difference is *at most* 8 bytes per file, but more likely 0...


HTH,
Alex



smime.p7s
Description: S/MIME Cryptographic Signature
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Identical files

2019-08-02 Thread Alexander Kobel
Hi,

On 01.08.19 18:03, G.W. Haywood via BackupPC-users wrote:
> Hi there,
> 
> On Fri, 26 Jul 2019, Ted Toal wrote:
> 
>> Is it easy to make updates to the documentation?
> 
> I haven't seen a reply to this so I'll take a stab at it, although I
> don't really know the proper procedure.  Mr. Barratt will know.

no offense meant to Ted, but I dare to guess a reason for the silence:
folks simply think that the explanation in the documentation is
sufficiently precise. And, to be honest, I tend to agree (with a very
minor exception, see the last paragraph of this mail). In particular
since I cannot remember that particular question popping up in the last 
couple of years that I've been following this list.

Anyway, not trying to keep you from proposing something to improve.

> It's not clear to me where you saw the "BackupPC documentation" which
> you mentioned in your OP.

I'd think it's the most official source I can imagine: the 4.3.1 docs,
directly linked from the home of http://backuppc.sourceforge.net/, at
https://backuppc.github.io/backuppc/BackupPC.html#Backup-basics

The questionable passage is:

BackupPC pools identical files. By "identical files" we mean files with
identical contents, not necessarily the same permissions, ownership or
modification time. Two files might have different permissions,
ownership, or modification time but will still be pooled whenever the
contents are identical. This is possible since BackupPC stores the file
metadata (permissions, ownership, and modification time) separately from
the file contents.

I have no brilliant idea how one could spell out "identical contents"
more precisely. But as a suggestion:

Would replacing "same permissions, ownership or modification time" by
"same filename, path, permissions, ownership, modification time, or
other metadata" make the crowd more happy?


IMHO, the only *serious* non-trivial case regarding the "sameness" of
files is alternate data streams on NTFS. A caveat regarding those might
be in order, if anyone already wants to touch the docs. (AFAIK, ADSs are
not part of the backup.)


Cheers,
Alex



smime.p7s
Description: S/MIME Cryptographic Signature
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] rsync-bpc claims that "file has vanished", but tries to download it over and over again

2019-07-30 Thread Alexander Kobel
Thanks Pierre-Yves for the hint, and Craig for confirming.

Just this minute, I was about to write that the maintainer of the Arch
Linux package (Sébastien Luttringer, amazingly swift!) already updated
the package with the new rsync-bpc and BackupPC::XS, and the issue is
gone with the new version.

Unfortunately, it's easy for him to miss updates on the auxiliary
packages, which are not distributed separately in the repo. I subscribed
to the Github release notifications and will ping him whenever something
changes.


Thank you all!
Alex


On 30.07.19 03:24, Craig Barratt via BackupPC-users wrote:
> Yes, that bug is fixed in the latest versions of BackupPC (4.3.1) and
> rsync_bpc (3.1.2.1).  It sounds like you have the latest BackupPC, but
> you will need to upgrade to the latest rsync_bpc.
> 
> Craig
> 
> On Mon, Jul 29, 2019 at 12:29 PM Alexander Kobel <a-ko...@a-kobel.de> wrote:
> 
> Hi,
> 
> On 29.07.19 18:30, Pierre-Yves Bonnetain-Nesterenko wrote:
> > On 29/07/2019 17:49, Alexander Kobel wrote:
> >> Any ideas about what could be the culprit?
> >
> > Looks like the « zombie files » bug which was corrected by last update
> > of BPC.
> 
> huh. My google-fu fails me here... Would that be "remove any extraneous
> BPC_FTYPE_DELETED file types in non-merged backup" from the recent
> rsync-bpc releases?
> 
> (And that's when I notice that both rsync-bpc and BackupPC-XS from the
> Arch package are not up to date...)
> 
> 
> Thanks,
> Alex
> 
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:    http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
> 
> 
> 
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
> 



smime.p7s
Description: S/MIME Cryptographic Signature
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] rsync-bpc claims that "file has vanished", but tries to download it over and over again

2019-07-29 Thread Alexander Kobel
Hi,

On 29.07.19 18:30, Pierre-Yves Bonnetain-Nesterenko wrote:
> On 29/07/2019 17:49, Alexander Kobel wrote:
>> Any ideas about what could be the culprit?
> 
> Looks like the « zombie files » bug which was corrected by last update
> of BPC.

huh. My google-fu fails me here... Would that be "remove any extraneous
BPC_FTYPE_DELETED file types in non-merged backup" from the recent
rsync-bpc releases?

(And that's when I notice that both rsync-bpc and BackupPC-XS from the
Arch package are not up to date...)


Thanks,
Alex



smime.p7s
Description: S/MIME Cryptographic Signature
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] rsync-bpc claims that "file has vanished", but tries to download it over and over again

2019-07-29 Thread Alexander Kobel
Dear all,

I just noticed that one of two virtually identical BackupPC instances,
both backing up the same host (my email server), has been reporting for a
couple of weeks that some files vanished - and indeed they have. So why
does it insist on trying to download them over and over again?


In the error log of server 1 (the problematic one), I get:

file has vanished:
"/users/akobel-a-kobel/.BackupPC/cur/1538092391.M656164P24400VFC01I01A60348_0.lupus.uberspace.de,S=6345:2,STa"

And, indeed, this file has been gone for quite a while (the timestamp in
the filename indicates that it was still there in Sep. 2018 - and I trust
Sourceforge to keep archives of the backuppc-users list...).

I couldn't find the file in any previous backup, so probably it's been
stored in some intermediate backup that has been deleted since. Still,
it throws an error in each and every backup run, full or incremental.

The log of server 2 stays silent about this file.


Furthermore, and perhaps even more surprising, server 1 reliably complains:

file has vanished: "/users/akobel-a-kobel/.BackupPC/dovecot.index.log.2"

But *this* file didn't vanish at all - it's reliably there, whenever I
check. It's properly backed up by server 2, but exists in *none* of the
backups of server 1. (And, yes, I can read and access it - only one user
account on this host, and that's the one who created this file.)


Both BackupPC servers are running version 4.3.1-1 from the Arch Linux
repo; that is, BackupPC 4.3.1, BackupPC-XS 0.58, and rsync-bpc 3.1.2.0.
Both servers access the host via rsync, same host account. Identical
host config file and, AFAICS, server config, apart from the TOPDIR and
hostname. Only difference, as far as I can see, is that server 1
(problematic) has an uncompressed pool, server 2 a compressed one.

Several other hosts are backed up by both servers without any hiccups.
Fsck'ing both gave zero complaints.


Any ideas about what could be the culprit?


Thanks,
Alex




smime.p7s
Description: S/MIME Cryptographic Signature
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Identical files

2019-07-25 Thread Alexander Kobel
On 24.07.19 23:07, Ted Toal wrote:
> BackupPC documentation says ‘by "identical files" we mean files with 
> identical contents, not necessary the same permissions, ownership or 
> modification time.’  What about filename and file directory, do these have to 
> match for a pair of files to be identical?

No. And the on-disk representation may be different, too - inodes,
filesystems, sparsity, (transparent) compression and encryption, you name
it; the files might even be on different machines, that's quite
expected... ;-)

From a practical perspective, files x and y are identical if and only if
`diff x y` returns 0. It's really just about the content.
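For illustration (nothing BackupPC-specific; it just shows that only the
contents matter - v4 keys its pool by the MD5 digest of the uncompressed
contents):

mkdir -p /tmp/pooldemo/a /tmp/pooldemo/b
printf 'hello\n' > /tmp/pooldemo/a/report.txt
printf 'hello\n' > /tmp/pooldemo/b/notes.md
chmod 600 /tmp/pooldemo/b/notes.md
touch -d '2001-01-01' /tmp/pooldemo/b/notes.md

# different names, paths, permissions and mtimes - but identical contents:
cmp -s /tmp/pooldemo/a/report.txt /tmp/pooldemo/b/notes.md && echo "identical contents"
md5sum /tmp/pooldemo/a/report.txt /tmp/pooldemo/b/notes.md   # same digest => one pool entry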


HTH,
Alex



smime.p7s
Description: S/MIME Cryptographic Signature
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] BackupPC Fuse filesystem [was: Only Getting empty directory structure]

2019-04-11 Thread Alexander Kobel

Hi,

concerning the Fuse interface:

On 11.04.19 01:38, Adam Goryachev wrote:
Perhaps someone will update the fuse plugin to work with BPC4, which 
could make this a lot easier.


That someone goes by "Craig": the one we all know and love and without 
whom this project wouldn't fly.


https://sourceforge.net/p/backuppc/mailman/message/35899426/

backuppcfs.pl.gz:

https://sourceforge.net/p/backuppc/mailman/attachment/CADSzEFhwZKY6%2BZAkiPxf1SMo0_LAbeEJzPX-6Tane%3DBhuDHkeQ%40mail.gmail.com/1/


It's not 100% perfect, but close to - Craig mentions broken up 
hardlinks, I experienced some set-up trouble concerning file ownership 
(cause not all client users exist on the server) and permissions. Read 
the fine manual for mount options for Fuse filesystems on how to allow 
arbitrary users to search the Fuse paths; you'll mount as backuppc user, 
but by default even root cannot traverse the Fuse paths. For good 
reasons, but in this case it's annoying. Also, it's slow.
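(For reference, the option in question is Fuse's allow_other/allow_root;
before a non-root user may use it, it has to be enabled system-wide:

  # /etc/fuse.conf
  user_allow_other

How exactly the option is passed to backuppcfs.pl I'd have to look up
again, so take this as a pointer rather than a recipe.)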


But those are issues that are fundamental to Fuse, AFAICS, and do not 
tell anything about BackupPCFS' quality. At least it's very convenient 
to occasionally browse backups.



Cheers,
Alex



smime.p7s
Description: S/MIME Cryptographic Signature
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Find file given digest (and a decompression error)

2018-11-27 Thread Alexander Kobel

Hi,

On 27.11.18 13:40, Guillermo Rozas wrote:

Pigz doesn't correctly support BackupPC compressed files, although
it will in some cases.  The reported error is likely the problem. 
Please use BackupPC_zcat instead and report back.



Of course, you're right :) (although pigz failed only in 2 files out of 
several thousands).


oh well, I was wondering about that. I've yet to see such a file (and 
probably never will, because I disabled pool compression for good and 
now use btrfs' lzop filesystem-based compression), but...


BackupPC_zcat decompresses both files correctly and their checksums are 
correct now. However, at least with one of the files there is something 
fishy going on because the compressed version is 60KB, the decompressed 
is 7GB!


... from what I understand, the main difference between "standard" 
gzip/pigz and BackupPC's variant is that the latter adds additional 
"sync" operations when a (e.g. sparse) file compresses way better than 
usual. In that case, BackupPC's zipping mechanism ensures that 
decompression only requires a fixed amount of memory, at the expense 
that extremely compressible data does not compress to 0.001%, but 
only to 0.01% or something. (I'm lacking the details, sorry.)


I'd bet that those two files are extremely sparse.
There are good reasons for such a file to be generated: e.g., from a 
ddrescue run that skipped lots of bad areas on a drive, or a VM disk 
image with a recently formatted partition, or similar. On many modern 
file systems supporting sparse files, the overhead for the holes in the 
file is negligible, so it's easier from a user perspective to allocate 
the "full" file and rely on the filesystem's abilities to optimize 
storage and access.
However, some of BackupPC's transfer methods (in particular, rsync) 
cannot handle sparse files natively, but since they compress so well, 
that's hardly an issue for either transfer or storage on the server.
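If you want to check whether those files are indeed sparse on the client,
comparing apparent and allocated size is usually enough (plain GNU
coreutils; the path is made up):

  du -h --apparent-size /path/to/suspect.img   # logical size
  du -h /path/to/suspect.img                   # blocks actually allocated

A large gap between the two numbers means the file has holes.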



The reason why I recommended pigz (unfortunately without an appropriate 
disclaimer) is that it

- never failed on me, for the files I had around at that time, and
- it was *magnitudes* faster than BackupPC_zcat.

But I had a severely CPU-limited machine; YMMV with a more powerful CPU.
Depending on your use case (and performance experience), it might still 
be clever to run pigz first and only run BackupPC_zcat if there is a 
mismatch. If a pigz-decompressed file matches the expected hash, I'd bet 
at approximately 1 : 2^64 that no corruption happened.
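If you want to automate that fallback, an untested sketch might look like
this (adjust the path to BackupPC_zcat to your installation):

#!/bin/bash
# check one (compressed) v4 pool file: try pigz first, fall back to BackupPC_zcat
file=$1
expected=${file##*/}                             # v4 pool files are named after their MD5 digest
zcat_bin=/usr/share/backuppc/bin/BackupPC_zcat   # path differs per distro

digest=$(pigz -dc "$file" 2>/dev/null | md5sum | cut -d' ' -f1)
if [ "$digest" != "$expected" ]; then
    # pigz disagreed or failed; double-check with BackupPC's own tool
    digest=$("$zcat_bin" "$file" | md5sum | cut -d' ' -f1)
fi
[ "$digest" = "$expected" ] && echo "OK  $file" || echo "BAD $file"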



Which brings me to:

I added a guide to the Wiki to find out where a pool file is
referenced.


That's great, thanks!


Indeed, thanks!

I'll check those 2 files tonight, and hopefully 
have a script working by the weekend.


Cool! If you don't mind and are allowed to, please share here...


Cheers,
Alex



smime.p7s
Description: S/MIME Cryptographic Signature
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC V4 and --checksum

2018-07-30 Thread Alexander Kobel

Hi,

On 2018-07-28 20:04, Guillermo Rozas wrote:
Agreed, that is my situation. I'm reasonably sure of the system (UPS,
Debian stable, ext4), but as my backups are relatively small (1...) I can
trade some extra hours of backup once in a while for the extra peace
of mind.


IIUC, you want a way to check the integrity of the pool files on the 
server side.
BackupPC 3 used to have such a function, by re-checksumming and 
verifying some percentage of the pool during a nightly (can't remember 
the details, and I don't have the v3 docs available).


If you want to do this for yourself, it's pretty easy with a cronjob. 
Just compare, for all files in $topDir/pool/*/*/, their md5sum with the 
filename. Same = good, not the same = bad.
If your pool is compressed, pipe the compressed files in 
$topDir/cpool/*/*/ through pigz [1] (which, as opposed to gzip, can 
handle the headerless gz format used there), as in the following piece 
of bash:


  digest=$(pigz -dc $file | md5sum -b | cut -d' ' -f1)

Now, check if $digest == $file, and you have a sanity check. (It's 
slightly more annoying to find out where $file was referenced in case it 
is corrupted; but it's possible, and I recommend not to worry about that 
until it happens.)


Of course, you can easily scrub only a part of your pool, just choose 
how many subdirectories you want to process each night.
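To make that concrete, an untested sketch of such a check (assuming the
v4 layout $topDir/(c)pool/XX/YY/ with files named after their MD5 digest;
note that pigz may not handle every BackupPC-compressed file, so
double-check suspects with BackupPC_zcat before worrying):

#!/bin/bash
topDir=/var/lib/backuppc      # adjust to your TopDir

# uncompressed pool: hash the contents directly
find "$topDir/pool" -type f ! -name 'poolCnt*' | while read -r file; do
    digest=$(md5sum "$file" | cut -d' ' -f1)
    [ "$digest" = "${file##*/}" ] || echo "BAD (pool):  $file"
done

# compressed pool: decompress the headerless zlib stream first
find "$topDir/cpool" -type f ! -name 'poolCnt*' | while read -r file; do
    digest=$(pigz -dc "$file" | md5sum | cut -d' ' -f1)
    [ "$digest" = "${file##*/}" ] || echo "BAD (cpool): $file"
done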



  1: https://zlib.net/pigz/


HTH,
Alex



smime.p7s
Description: S/MIME Cryptographic Signature
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] $host nice -n 19 sudo

2018-07-10 Thread Alexander Kobel

Hi,

On 07/10/2018 01:22 PM, Kbtest Testar wrote:

Hi...

I have installed latest version of Backuppc  4.2.1. and want to set  
$host nice -n 19 as i used in version 3 but can't find where to set it 
in version 4.2.1 any hints that can lead me to where to put this 
configuration is greatly appriciated.


if you use rsync, ${RsyncClientPath} in the Xfer section should do the job:

https://backuppc.github.io/backuppc/BackupPC.html#Rsync-Rsyncd-Configuration

(e.g., set it to '/usr/bin/nice -n 19 /usr/bin/rsync').
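In config.pl (or the per-host config) that's:

  $Conf{RsyncClientPath} = '/usr/bin/nice -n 19 /usr/bin/rsync';

(Paths are examples; adjust to where nice and rsync live on the client.)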

I assume that the same holds for TarClientPath and SmbClientPath; with 
Rsyncd and FTP, the client machine already runs a server daemon, which 
would itself have to be started with nice.



HTH,
Alex



smime.p7s
Description: S/MIME Cryptographic Signature
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] ssh from an account without a shell

2018-02-07 Thread Alexander Kobel

Hi Phil,

are you sure that the correct SSH key is used for the connection?

You mentioned that the command works as root, so I assume that the SSH 
key that you authorized for connecting to the client is root's rather 
than backuppc's.  (Crucial line: "Permission denied 
(publickey,keyboard-interactive).")
If so, either ssh-genkey as backuppc and authorize the corresponding 
pubkey on the client, or copy over root's key to backuppc; typically, 
this should work as root via

cp -i ~root/.ssh/id_?sa* ~backuppc/.ssh/
chown backuppc:backuppc ~backuppc/.ssh/id_?sa*

Another reason could be that, unless you disabled SSH host key checking, 
you need to manually SSH to the client as user backuppc once to confirm 
the client's key at the prompt.  To check this,

su -s /bin/bash backuppc
ssh ${client_user}@xxx.xxx.xxx.xxx

Note the ${client_user} - I guess that would be root, so it works 
without any specification when you run SSH as root, but not as backuppc. 
 You might want to add a corresponding User entry in your backuppc 
user's SSH config file.
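Something along these lines in ~backuppc/.ssh/config (host and key file
are placeholders):

  Host xxx.xxx.xxx.xxx
      User root
      IdentityFile ~/.ssh/id_rsa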



HTH,
Alex


On 02/07/2018 04:10 PM, Philip Parsons (Velindre - Medical Physics) wrote:

Hi Robert,

Thanks for that.

I’ve just tried it and (I should have already guessed this!), it gave 
the same error message as BackupPC reported J


Any ideas what I may be doing wrong?

Thanks,

Phil

*From:*Robert Trevellyan [mailto:robert.trevell...@gmail.com]
*Sent:* 07 February 2018 14:33
*To:* General list for user discussion, questions and support 


*Subject:* Re: [BackupPC-users] ssh from an account without a shell

To run a shell as the backuppc user, do something like this:
su -s /bin/bash backuppc


Robert Trevellyan

On Wed, Feb 7, 2018 at 8:22 AM, Philip Parsons (Velindre - Medical 
Physics) > wrote:


Hi all,

I’ve tried looking in a couple of places, but I can’t find the
answer to this.

I have essentially used this webpage to install and configure
backuppc 4.1.5 on Ubuntu 16.04 LTS -

https://github.com/backuppc/backuppc/wiki/Installing-BackupPC-4-from-git-on-Ubuntu-Xenial-16.04-LTS

I then tried to create an rsync backup to a SUSE server using
https://www.howtoforge.com/linux_backuppc_p4  and various other sources.

When I try to run the backup it fails with ‘No files dumped for
share /data/rtgrid/’

When I run the line from the XferLOG:

/usr/local/bin/rsync_bpc --bpc-top-dir /data/backuppc
--bpc-host-name rtgridserver --bpc-share-name /data/rtgrid/
--bpc-bkup-num 0 --bpc-bkup-comp 3 --bpc-bkup-prevnum -1
--bpc-bkup-prevcomp -1 --bpc-bkup-inode0 2 --bpc-attrib-new
--bpc-log-level 1 -e /usr/bin/ssh\ -l\ root
--rsync-path=/usr/bin/sudo\ /usr/bin/rsync --super --recursive
--protect-args --numeric-ids --perms --owner --group -D --times
--links --hard-links --delete --delete-excluded --one-file-system
--partial --log-format=log:\ %o\ %i\ %B\ %8U,%8G\ %9l\ %f%L --stats
--checksum --timeout=72000 --exclude=var/experiment
--exclude=junkdrawer --exclude=scratchpad --exclude=tmp
xxx.xxx.xxx.xxx:/data/rtgrid/

As root, it runs.

However, I get the error message (tagged at the bottom for
readability) when backuppc runs the job.  I’m probably making things
more confusing than they should be, but I think the install webpage
creates a shell-less service backuppc user.  I then can’t su to
backuppc to run the line to test it.

Has anyone else used this kind of account on backuppc?  Can anyone
suggest a fix for this?

Thanks in advance,

Philip Parsons

Contents of file /data/backuppc/pc/rtgridserver/XferLOG.bad.z,
modified 2018-02-07 06:00:06

XferLOG file /data/backuppc/pc/rtgridserver/XferLOG.0.z created
2018-02-07 06:00:00

Backup prep: type = full, case = 1, inPlace = 1, doDuplicate = 0,
newBkupNum = 0, newBkupIdx = 0, lastBkupNum = , lastBkupIdx = 
(FillCycle = 0, noFillCnt = )


Running: /usr/local/bin/rsync_bpc --bpc-top-dir /data/backuppc
--bpc-host-name rtgridserver --bpc-share-name /data/rtgrid/
--bpc-bkup-num 0 --bpc-bkup-comp 3 --bpc-bkup-prevnum -1
--bpc-bkup-prevcomp -1 --bpc-bkup-inode0 2 --bpc-attrib-new
--bpc-log-level 1 -e /usr/bin/ssh\ -l\ root
--rsync-path=/usr/bin/sudo\ /usr/bin/rsync --super --recursive
--protect-args --numeric-ids --perms --owner --group -D --times
--links --hard-links --delete --delete-excluded --one-file-system
--partial --log-format=log:\ %o\ %i\ %B\ %8U,%8G\ %9l\ %f%L --stats
--checksum --timeout=72000 --exclude=var/experiment
--exclude=junkdrawer --exclude=scratchpad --exclude=tmp
xxx.xxx.xxx.xxx:/data/rtgrid/ /

full backup started for directory /data/rtgrid/

Xfer PIDs are now 111286

This is the rsync child about to exec /usr/local/bin/rsync_bpc

Permission denied 

Re: [BackupPC-users] 4.1.5 Error: Wrong user: my userid is 33, instead of 126(backuppc)

2017-12-19 Thread Alexander Kobel

On 12/19/2017 05:07 PM, JC Francois wrote:

Hi,

I have been successfully running backuppc on an archlinux server for a
couple of months. Since the last update of the arch package I am no
longer able to access the web GUI. [...]


Hi,

are you sure that the backuppc package is the culprit, rather than an 
apache update or some other, seemingly unrelated, system package that 
might have changed the setuid behavior?


IIUC, in your setup, the *interpreter* needs to be run as user backuppc, 
that is /usr/bin/perl; did you have the setuid bit set on that file 
before the update?



I read the comment in the documentation about setuid emulation not being
supported on many systems but it worked until the last backuppc update
and perl has not been updated since september.


Probably you've seen those before - but just in case you didn't: Do the 
hints from the Arch wiki help?



https://wiki.archlinux.org/index.php/BackupPC#The_webserver_user_and_the_suid_problem

I'm not using this setup, but I realize that one difference is 
user:group backuppc:http instead of backuppc:backuppc for 
BackupPC_Admin.  Also, the description there suggests that if your 
approach worked before, it was a bug rather than a feature...



Best,
Alex



smime.p7s
Description: S/MIME Cryptographic Signature
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Bug in rsync_bpc --sparse?

2017-12-15 Thread Alexander Kobel

Hi Craig,

thanks for your swift reply.

On 2017-12-15 05:17, Craig Barratt via BackupPC-users wrote:
Unfortunately sparse files are not supported by rsync_bpc, and there are 
no plans to do so.


Okay. Not a big impact for BackupPC's files, anyway - I just thought it 
was safe and harmless, but I was proven wrong...



I should make it a fatal error if that option is specified.


Yes, that would be great to avoid future mistakes.

I believe a full backup (without --sparse of course) should update the 
files to their correct state.


Okay. For me to understand: the MD5 hashes are generated on the server 
side, correct? So a file that was transferred incorrectly will not be 
stored under the hash of the original file? And the full backup does not 
just skip based on size, times and names, but on the actual hash of the 
file? In that situation I see why running a full backup should resolve 
everything.



May I ask you to crank out a short comment on point d) as well? If it's 
complicated, don't. But I found earlier questions on how to decompress 
an entire pool on the mailing list to employ ZFS' or Btrfs' compression, 
and while it's officially unsupported to convert the pool, I might try 
if (and only if) my assumptions are correct on what would need to be done.

d) On my *actual* server, I used compression. This incident taught
me to verify some of the files manually, and to perhaps migrate to
filesystem compression (which I had planned anyway) to keep things
as simple as possible.
   d.1) BackupPC_zcat for verifying/decompressing has a remarkable
overhead for a well-grown set of small files (even when pointed
directly to the pool files). From what I can tell, Adler's pigz [2]
implementation supports headerless zlib files and is *way* faster.
Also, all my tests show that files decompress to output with the
expected hashes encoded in the filename. However, I remembered that
BackupPC's compression flushes in between, apparently much like
pigz. Are BackupPC's compressed files *fully* in default zlib
format, or do I need to expect trouble with large files in corner cases?
   d.2) Conceptually, what is needed to convert an entire v4 pool to
uncompressed storage? Is it just
    - decompression of all files from cpool/??/?? to pool/??/??
(identical names, because hashes are computed on the decompressed data)
    - move poolCnt files from cpool/?? to pool/??
    - replace compression level in pc/$host/backups and
pc/$host/nnn/backupInfo
   or do any refCnt files need to be touched as well?
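Purely for concreteness - and assuming the steps above are actually
correct, which is exactly my question - step one might look roughly like
this (untested, and certainly nothing to run on a pool you care about yet):

#!/bin/bash
# step one only: decompress every cpool file into pool/, keeping the name
topDir=/var/lib/backuppc          # adjust
cd "$topDir" || exit 1
find cpool -type f ! -name 'poolCnt*' | while read -r f; do
    out="pool/${f#cpool/}"
    mkdir -p "${out%/*}"
    /usr/share/backuppc/bin/BackupPC_zcat "$f" > "$out"   # path differs per distro
done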



Thanks a lot,
Alex



smime.p7s
Description: S/MIME Cryptographic Signature
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] Share exclude lists among different servers

2017-11-29 Thread Alexander Kobel

Dear all,

is there a good/clean/recommended way of sharing exclude lists between 
servers by decoupling them from the main config.pl file?  I have a 
couple of servers whose configs differ just a bit, but enough that 
synchronizing changes becomes slightly annoying.  Also, I'm sure that 
many people have put quite a bit of effort to write sensible default 
exclude lists, and sharing/distributing such lists would be easier if 
they are decoupled from the rest of the configuration.


On a related note (perhaps more appropriate for -devel): 
$Conf{RsyncArgsExtra} pretty much allows me to do what I need:


$Conf{RsyncArgsExtra} = [
'--exclude-from=$confDir/pc/$host.exclude',
];
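The referenced file would then be a plain rsync exclude-from list, one
pattern per line, e.g. (made-up entries):

  # $confDir/pc/somehost.exclude
  /proc/
  /sys/
  /tmp/
  .cache/
  *.iso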

Unfortunately, there is no variable substitution for $shareName, unlike 
in the tar and SMB client commands.  Is there a specific reason why it 
is not allowed for RsyncArgs?



Thanks,
Alex



smime.p7s
Description: S/MIME Cryptographic Signature
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Mark backup for permanent retention

2017-11-28 Thread Alexander Kobel

On 11/28/2017 05:46 PM, Nick Bright wrote:

On 11/28/2017 10:38 AM, Nick Bright wrote:
Is there a way to mark a backup point (in this case, it's an 
incremental) so that the backup (and all backups it depends on) are 
permanently retained?


e.g. for a server that's failed or been decommissioned?

I may have already done so by disabling backups with 
$Conf{BackupsDisable} = 0;


"Disable all full and incremental backups. These settings are useful for 
a client that is no longer being backed up (eg: a retired machine), but 
you wish to keep the last backups available for browsing or restoring to 
other machines."


Sounds like that does what I'm looking for, and I just skimmed over the 
part about "keep the last backups available". I'm interpreting this as 
it'll retain all existing backups. Could anybody confirm?


Confirmed, except that 0 means "not disabled".

AFAIU, the host still participates in cleanup according to the usual 
settings (FullKeepCnt, FullKeepCntMin, IncrKeepCnt, IncrKeepCntMin), but 
since you certainly didn't set all of those to 0, you will be golden.
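Concretely, for a retired host that's a one-liner in its config (1 = no
automatic backups, manual ones via the CGI still work; 2 = no backups at
all, existing ones stay browsable/restorable):

  $Conf{BackupsDisable} = 2;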



Cheers,
Alex



smime.p7s
Description: S/MIME Cryptographic Signature
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backing up Windows 7 Pro, not enough permissions

2017-05-23 Thread Alexander Kobel
Hi Michael Huntley, Michael Stowe, and everybody,

I overlooked your reply and only now saw it in the archives - also, your 
private mail address isn't visible there, so I couldn't reach out with a 
"thanks" privately:

> Stowe has been around forever.  I suggest you read his entire blog -
> very entertaining. 
> 
> Here's his BackupPC stuff: 
> 
> http://www.michaelstowe.com/backuppc/ 

Thanks a lot for the pointer; testing soon!


Cheers,
Alexander



On 2017-05-17 09:07 AM, Alexander Kobel wrote:
> Hi Michael,
> 
> On 2017-05-15 06:32 PM, Holger Parplies wrote:
>> Hi,
>>
>> Michael Stowe wrote on 2017-05-15 09:58:08 -0500 [Re: [BackupPC-users] 
>> Backing up Windows 7 Pro, not enough permissions]:
>> [...]
>>> At any rate, these reasons are why I personally switched to a
>>> combination of rsync and vshadow (to handle open files) and put together
>>> a package to install the proper files on the client side.  [...]
> 
> would you mind sharing that package, in whatever state it is? I use a 
> similar setup (Cygwin + rsync over SSH), but getting it to run is less 
> than intuitive.
> 
> I have yet to find a simple and fool-proof way of installing that 
> combination (never mind keeping it up to date). It takes me less and 
> less time every time I do it (every few months), but it's still a pain. 
> IIRC, last time I had to refresh my mind on
> - how to enable a cyg_server (appx. a.k.a. root) account with the proper 
> access rights
> - how to open port 22 on the firewall, and (if need be) respond to pings,
> - how to make the SSH server accept the BackupPC server pubkey (because 
> cyg_server has no home by default).
> And that's even without vshadow.
> 
> Certainly not an approach that I can load off to the average Joe user, 
> say, "download and install ..., and I do the BackupPC server setup 
> remotely".
> 
> 
> Cheers,
> Alexander

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backing up Windows 7 Pro, not enough permissions

2017-05-17 Thread Alexander Kobel
Hi Michael,

On 2017-05-15 06:32 PM, Holger Parplies wrote:
> Hi,
>
> Michael Stowe wrote on 2017-05-15 09:58:08 -0500 [Re: [BackupPC-users] 
> Backing up Windows 7 Pro, not enough permissions]:
>[...]
>> At any rate, these reasons are why I personally switched to a
>> combination of rsync and vshadow (to handle open files) and put together
>> a package to install the proper files on the client side.  [...]

would you mind sharing that package, in whatever state it is? I use a 
similar setup (Cygwin + rsync over SSH), but getting it to run is less 
than intuitive.

I have yet to find a simple and fool-proof way of installing that 
combination (never mind keeping it up to date). It takes me less and 
less time every time I do it (every few months), but it's still a pain. 
IIRC, last time I had to refresh my mind on
- how to enable a cyg_server (appx. a.k.a. root) account with the proper 
access rights
- how to open port 22 on the firewall, and (if need be) respond to pings,
- how to make the SSH server accept the BackupPC server pubkey (because 
cyg_server has no home by default).
And that's even without vshadow.

Certainly not an approach that I can load off to the average Joe user, 
say, "download and install ..., and I do the BackupPC server setup 
remotely".


Cheers,
Alexander

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] server maintenance: reconstruct missing poolCnt; find/delete references to missing pool files

2017-03-07 Thread Alexander Kobel
Hi Craig,

and thanks a lot for your swift reply.  Comforting to see that both messages 
are known and harmless.

On 2017-03-07 18:24, Craig Barratt wrote:
>>> BackupPC_refCountUpdate: doing fsck on  #1188 since there are no 
>>> poolCnt files
>>> BackupPC_refCountUpdate: doing fsck on  #1190 since there are no 
>>> poolCnt files
>>> ...
>>> BackupPC_refCountUpdate: host  got 0 errors (took 5 secs)
>>
>> The backups in question seem to be fully intact; [...]
>> 
> This is perfectly ok.  BackupPC 4.0.0alpha3 and prior 4.x versions
> didn't store reference counts per backup.  [...]  So BackupPC_refCountUpdate 
> is
> simply adding reference counts to backups done by BackupPC
> 4.0.0alpha3.  It's a one-time thing.

Got that, and...

> There might be an issue that an incremental done by BackupPC
> 4.0.0alpha3 with no changes will have an empty backup tree, and
> BackupPC_refCntUpdate will continually report that there are no
> poolCnt files for that backup.  That's benign.  [...]

... indeed that's the case here.  It's backups of a data directory on a server 
that rarely changes.

> In 4.0.0, BackupPC_dump flags that by creating a file
> "HOST/NNN/refCnt/noPoolCntOk, which makes BackupPC_refCntUpdate
> quietly ignore that backup.  Perhaps I should have
> BackupPC_refCntUpdate notice that legacy case and create the
> noPoolCntOk file...

Certainly low priority, but you might keep it on the list if it's not a lot of 
work.
On the other hand, now everyone who searches for the warning will find this 
mailing list post, and be pleased to hear that a simple
  touch HOST/NNN/refCnt/noPoolCntOk
gets rid of the warning.  (Sanity check before: confirm that HOST/NNN/ did 
not contain anything but backupInfo and an empty refCnt/ directory.)

>> BackupPC_refCountUpdate: missing pool file  
>> count 30
>> BackupPC_refCountUpdate: missing pool file 0601e1b90a7f92ce4cffa588ef2cc9da 
>> count 1
>> ...
>> BackupPC_refCountUpdate: missing pool file ea1bd7ab2e00 
>> count 1
> 
> This is a bug in rsync-bpc [...] (yes, I had "<" instead of "<="... doh!).

Oh, well...  He that has never written that error, let him first cast a stone... 
;-)

> Future backups with 4.0.0 (assuming the same file exists on the
> client) will be updated with the correct digest, but the old backups
> will still have the wrong one.  The errors will go away when the
> corresponding backups eventually expire.

Okay.  Just important to know that it will "fix itself" rather than getting 
worse over time.


Thanks again for your reply, and thanks for such a great overall program!


Alexander


> On Tue, Mar 7, 2017 at 6:56 AM, Alexander Kobel <a-ko...@a-kobel.de> wrote:
> 
> Dear all,
> 
> I have a rather small, private, non-$$$M-mission-critical instance of
> a BackupPC server running for years (that went from several
> 3.something through 4.0.0alpha3 to, recently, 4.0.0). After the 4.0.0
> migration, I decided to run again some semi-manual maintenance (read:
> fsck and refCountUpdate). Yet, after an (admitted, more or less
> random) sequence of BackupPC_fsck BackupPC_fsck -f BackupPC_fsck -f
> -s BackupPC_refCountUpdate -m BackupPC_refCountUpdate -m -F -c 
> BackupPC_refCountUpdate -m -F -c -s BackupPC_fixupBackupSummary 
> BackupPC_nightly 0 255 I'm still stuck with the following messages:
> 
>> BackupPC_refCountUpdate: doing fsck on  #1188 since there are
>> no poolCnt files BackupPC_refCountUpdate: doing fsck on 
>> #1190 since there are no poolCnt files ... BackupPC_refCountUpdate:
>> host  got 0 errors (took 5 secs)
> 
> The backups in question seem to be fully intact; some are full
> backups, some are incremental. It's just on a minority of backups
> (appx. 15 out of 350 backups), and fortunately on small ones where
> fsck does not take ages, so it does not bother me too much.
> Nevertheless, can the missing poolCnt data be recomputed? fsck seems
> to do the counting from scratch; can this be stored?
> 
>> BackupPC_fsck: building main count database 
>> BackupPC_refCountUpdate: missing pool file
>>  count 30 BackupPC_refCountUpdate:
>> missing pool file 0601e1b90a7f92ce4cffa588ef2cc9da count 1 ... 
>> BackupPC_refCountUpdate: missing pool file
>> ea1bd7ab2e00 count 1 ... 
>> BackupPC_refCountUpdate total errors: 70 BackupPC_fsck: Calling
>> poolCountUpdate
> 
> IIUC, this means that there are reference to files with that hash
> *somewhere* in the backups, but the respective files are missing from
> the cpool. Since the number is very low, I'm not really w