Re: [BackupPC-users] rsync via ssh/rrsync

2024-05-23 Thread Mike Hughes
I think there's some confusion about what that clause in authorized_keys 
does. According to [1], it basically executes the command for you. So when the 
backuppc user creates the ssh connection, it runs:
'/usr/bin/rrsync /'
on the target. You don't need to "initiate" the rsync this way, because 
BackupPC manages that once you select rsync as the transfer method. All you 
need to do is permit rsync to run, which is not accomplished in the 
authorized_keys file. Remove that restriction.
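
For example, a plain entry along these lines (key shortened, IP hypothetical; 
keeping a from= restriction is fine) lets BackupPC drive rsync itself:

from="192.0.2.10" ssh-rsa AAAAB3...rest-of-key... backuppc@backupserver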

Additionally, if you are connecting as the backuppc user, what provides that 
user permission to rsync the files? In my case, I followed this tip from the 
manual [2] and added an entry in /etc/sudoers (edited with visudo) to allow 
backuppc to run rsync without a password prompt:

## Allow backuppc user to use rsync
backuppc  ALL=NOPASSWD: /usr/bin/rsync --server *

Hope this helps!

[1]https://serverfault.com/a/803873/125737
[2] $Conf{RsyncClientPath} = 'sudo /usr/bin/rsync';


From: Gandalf Corvotempesta 
Sent: Thursday, May 23, 2024 3:31 AM
To: General list for user discussion, questions and support 

Subject: Re: [BackupPC-users] rsync via ssh/rrsync

This is the command in authorized_keys:

command="/usr/bin/rrsync /",restrict,from=

So it should allow rsync to access the full server, as expected,
but BPC doesn't show any files in the backup.

Il giorno gio 23 mag 2024 alle ore 10:24 Gandalf Corvotempesta
 ha scritto:
>
> Il giorno gio 23 mag 2024 alle ore 09:16 Christian Völker via
> BackupPC-users  ha scritto:
> >
> > Well, I guess you'll need to make sure ssh works fine.
> > To do so, go to your backuppc server and switch into the user context of
> > backuppc with "su - backuppc". From there issue "ssh user@hostname" and
> > accept the host key of the target client.
> > Once done, it should run without any problems, as long as you have rsync
> > installed on your target and it is configured to be found in the default
> > path.
>
> Yes, it was an ssh-key issue: backuppc runs as the backuppc user, but I'd
> transferred the root ssh key, not the backuppc key. Now this is fixed,
> the backup completes as expected, and in the transfer log I can see the
> transferred files.
>
> BUT there is a huge issue: when browsing the backup that just finished,
> backuppc says it is empty and no files are shown.
>
> Any clue?


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] PingCmd no longer having effect

2024-04-05 Thread Mike Hughes
Hi Ian,

I'm not sure why it would suddenly stop working for you as that seems like a 
completely legitimate solution.
FWIW, this is the command I use for unpingable clients:
$Conf{PingCmd} = '/bin/echo $host';

Best of luck!

From: Ian via BackupPC-users 
Sent: Friday, April 5, 2024 1:27 PM
To: backuppc-users@lists.sourceforge.net 
Cc: Ian 
Subject: [BackupPC-users] PingCmd no longer having effect

Hi,

I've been using BackupPC for many years now, and have used /usr/bin/true as the 
ping command for hosts that can't respond to ping.  However after a reboot on 
January 31st, my server stopped obeying pingcmd and stopped backing up those 
hosts due to lack of ping response.  I am using BackupPC 4.4.0-5ubuntu2 amd64 
on Ubuntu 22.04.

So far I have tried:

Overriding $Conf{PingCmd}, $Conf{PingPath}, and $Conf{Ping6Path} in the global 
config from webui and config file.

Setting  $Conf{PingCmd} in the client config to either /usr/bin/true or ping 
-c1 127.0.0.1 using webui and config file.

None of the above seem to make any difference at all anymore.  No matter what I 
set, I restart the service and get the same error:

2024-04-05 13:58:38 can't ping  (client = myclient.com); exiting


However I can confirm other changes to the client config files do have an 
effect.  For example setting a ClientNameAlias in the client config works.

I also tried deleting the client config file and allowing the webui to create 
it, and doing an apt-get --reinstall install backuppc.  Same effect.

I'm wondering if anyone has noticed anything similar, or has advice on how to 
debug.

Thanks,
Ian
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] How to run a manual backup.

2024-03-04 Thread Mike Hughes
Hi Tony,

If you're stuck at the ssh part, you won't be successful running a backup as 
making the connection is the first step.
What happens when you try to ssh to this client from the backuppc account on 
the backup server? I assume you've already tried ssh-copy-id but it's failing. 
I'm having trouble imagining what part of an upgrade would have broken a 
functional ssh key pairing unless the IP address changed.

As for a local backup you could use rsync to copy home to another partition:

rsync -a dir1/ dir2

https://www.digitalocean.com/community/tutorials/how-to-use-rsync-to-sync-local-and-remote-directories
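
And to answer the subject line directly: on the BackupPC server you can kick 
off a backup from the command line as the backuppc user (path shown for an 
RPM-style install; adjust for your distro):

/usr/share/BackupPC/bin/BackupPC_dump -f -v hostname

The -f flag forces a full backup; use -i for an incremental.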

From: Tony Molloy 
Sent: Sunday, March 3, 2024 9:42 AM
To: backuppc-users@lists.sourceforge.net 
; Tony Molloy 
Subject: [BackupPC-users] How to run a manual backup.

Is it possible to run a backup of a share from the command line. I've checked 
the manual and can't seem to find it.

I've several CentOS-Stream-8 boxes backing up without problems. I upgraded one 
box to CentOS-Stream-9 and I'm having trouble configuring sshd to get rsync for 
backuppc working. I'd just like to do a full backup of the home directories 
until I get sshd working.

Thanks,
Tony.
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Lost linux file ownership on restore

2022-11-17 Thread Mike Hughes
This is one I found but have not tried:
https://gist.github.com/phoenix741/99a5076569b01ba5a116cec24a798d5f
It mentions being updated for 4.x in 2017, which is when 4.0 was released.

From: backu...@kosowsky.org 
Sent: Thursday, November 17, 2022 8:44 AM
To: General list for user discussion, questions and support 

Subject: Re: [BackupPC-users] Lost linux file ownership on restore

Paul Fox wrote at about 15:52:07 -0500 on Wednesday, November 16, 2022:
 > backu...@kosowsky.org wrote:
 >  > 'backuppcfs' is a (read-only) FUSE fileystem that allows you to see
 >  > the contents/ownership/perms/dates/Xattrs etc. of any file in your
 >  > backup.
 >  >
 >  > It is great for troubleshooting as well as for partial restores...
 >
 > Are you referring to the version that Craig attached to this list
 > message in June 2017?  Or is there a later version?
 >
 > (Not that I don't trust Craig to have gotten the v4 support right the
 > first time.  :-)
 >
 >   https://sourceforge.net/p/backuppc/mailman/message/35899426/
 >
 > paul
 > =--
 > paul fox, p...@foxharp.boston.ma.us (arlington, ma, where it's 41.7 degrees)
 >

I believe a later version was posted and is indeed necessary for v4
since:
- The notion of how incrementals are constructed from fulls has been
  inverted
- The format of the attrib files has changed extensively
- The pool hierarchy, naming convention, and storage format have
  changed
- Xattrs are now supported

Look through the archives... it was posted since v4 somewhere...




___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] HELP - FW: BackupPC administrative attention needed

2022-03-24 Thread Mike Hughes
Hi Edward,

I'm not sure how much help this list can be, as we are just a group of people 
who administer our own installations of BackupPC. It's a very flexible 
solution that can handle massive amounts of backup data and run 
maintenance-free for years, so it's no surprise if it got forgotten about 
until the disk filled up.

Most of the folks on this list installed the software ourselves. The "PC Backup 
Genie" is just a cutesy name attributed to the process that sent the email. Can 
you reach out to the individual responsible for installing the system? If you 
can't contact the person in charge of the BackupPC build, the best clue is the 
originating server of alerts like the one below; that will lead you to the 
source.

As Senior IT Admin you're familiar with how easily the "FROM" address of an 
email can be spoofed. There's no guarantee it came from the "remote05" server, 
but checking that server is a simple first step. If it isn't running BackupPC, 
ask your email administrator for a mail trace to determine which server sent it.
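
Once you've identified the server, confirming the full filesystem is a 
one-liner (using the mount point named in the alert below):

df -h /srv/backuppc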


From: Edward Cotter 
Sent: Thursday, March 24, 2022 2:19 AM
To: backuppc-users@lists.sourceforge.net; backuppc-users-requ...@lists.sourceforge.net
Cc: Mindas Berzonskis; Charles Chang; Scott Armit; Chaminda Delpagodage
Subject: [BackupPC-users] HELP - FW: BackupPC administrative attention needed


Hello backuppc support,



New user to your mailing list and completely new to the product.



As Senior IT Admin – tasked with resolving issues with our existing backup 
solution through your software.



These alerts are coming out everyday for our company – looking for basic 
guidance and direction with suggested steps to resolve to avoid any data loss 
for our organization.



Please advise at your earliest convenience.



Working to discover the file system location /srv/backuppc to see what space is 
available and if it can be expanded.



Thanks,



From: backu...@remote05.aliaswire.com
Sent: Sunday, March 20, 2022 6:00 PM
To: it-admin <it-ad...@aliaswire.com>
Subject: BackupPC administrative attention needed

Yesterday 156 hosts were skipped because the file system containing 
/srv/backuppc/ was too full.  The threshold in the configuration file is 95%, 
while yesterday the file system was up to 97% full.  Please find more space on 
the file system, or reduce the number of full or incremental backups that we 
keep.

Regards,
PC Backup Genie





Edward R. Cotter  |  Senior IT Administrator



152 Middlesex Turnpike | Burlington MA 01803

Office: 617-393-5388

Mobile: 781-572-9764



ecot...@aliaswire.com

aliaswire.com








From: Chaminda Delpagodage 
Sent: Monday, March 21, 2022 10:46 AM
To: Edward Cotter; Mindas Berzonskis
Cc: Scott Goldthwaite; Charles Chang; Timothy M. Spear
Subject: Re: BackupPC administrative attention needed



Thanks Ed for following up on this! This space issue needs to be addressed asap 
as backuppc is our current backup solution until a new tool is rolled out.



Thanks



--

Chaminda Delpagodage | Senior Director of SRE

Aliaswire



From: Edward Cotter <ecot...@aliaswire.com>
Date: Monday, March 21, 2022 at 10:06 AM
To: Mindas Berzonskis <mberzons...@aliaswire.com>
Cc: Chaminda Delpagodage <chami...@aliaswire.com>, Scott Goldthwaite 
<sgoldthwa...@aliaswire.com>, Charles Chang <cch...@aliaswire.com>, Timothy M. 
Spear <tmsp...@aliaswire.com>
Subject: FW: BackupPC administrative attention needed

Mindas,

When we move to our new backup solution.

Will it take care of this issue we have with awcamdsk through backup genie?

Thanks,


Edward R. Cotter  |  Senior IT Administrator

152 Middlesex Turnpike | Burlington MA 01803
Office: 617-393-5388
Mobile: 781-572-9764

ecot...@aliaswire.com
aliaswire.com






-Original Message-
From: backu...@remote05.aliaswire.com
Sent: Sunday, March 20, 2022 6:00 PM
To: it-admin <it-ad...@aliaswire.com>
Subject: BackupPC administrative attention needed

Yesterday 156 hosts were skipped because the file system containing 
/srv/backuppc/ was too full.  The threshold in the configuration file is 95%, 
while yesterday the file system was up to 97% full.  Please find more space on 
the file system, or reduce the number of full or incremental backups that we 
keep.

Regards,
PC Backup Genie

Re: [BackupPC-users] (no subject)

2022-03-10 Thread Mike Hughes
You'll need to prepend your command with the following:
$sshPath -q -x -l backuppc $host curl...
or
$sshPath -q -x -l root $host curl...

Click on the hyperlink DumpPreUserCmd for more details and examples, such as:

'$sshPath -q -x -l root $host /usr/bin/dumpMysql';
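
Applied to your curl command, it would look something like this (a sketch 
reusing the path and URL from your message; note the command then runs on the 
client, so the output file lands on the client's filesystem):

$Conf{DumpPreUserCmd} = '$sshPath -q -x -l root $host curl -s -u login:password -o /home/backuppc/mydir/myfile.zip http://$host/path/to/api';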


From: Евгений 
Sent: Thursday, March 10, 2022 8:03 AM
To: General list for user discussion, questions and support 

Subject: [BackupPC-users] (no subject)

Hello.
I'm trying to run a backup with DumpPreUserCmd.
The command is something like: curl -s -u login:password -o 
/home/backuppc/mydir/myfile.zip http://$host/path/to/api
and then the tar xfer method backs up /home/backuppc/mydir.
curl downloads the data, but the output file is zero bytes, and the backup 
ends with an error that zero data was backed up.
With curl -v I can see that the download runs without errors and completes 
with the real size of the file.
I have no idea how to find the reason for this error. Any help?

I use BackupPC 4.4.0 in Docker.

I've also attached the xfer log.
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] BackupFilesExclude doesn't exclude a directory from backups

2022-02-24 Thread Mike Hughes
Hi Chiel,

Since you're targeting /var, the directory to exclude would be /log, not 
/var/log. Be sure to add it directly under the BackupFilesExclude config for 
the /var share.
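
In config terms, something like this (a sketch; the hash key is the share name 
and the exclude path is relative to that share's root):

$Conf{BackupFilesExclude} = {
   '/var/' => [
 '/log'
   ]
};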

From: chiel 
Sent: Thursday, February 24, 2022 7:16 AM
To: backuppc-users@lists.sourceforge.net 
Subject: [BackupPC-users] BackupFilesExclude doesn't exclude a directory from 
backups

Hello,

I'm using BackupPC 4.4.0 and used the web gui to backup some
directories. This created the following config.

$Conf{RsyncShareName} = [
   '/etc/',
   '/usr/',
   '/var/'
];

I try to exclude the log directory using the following config. However
this doesn't seem to work.

$Conf{BackupFilesExclude} = {
   '*' => [
 '/var/log'
   ]
};

The log directory is still included in the backups.

Is the above config correct? I'm using rsync. All machines are ubuntu.

Chiel


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] "mutt" and "/etc/aliases"

2021-09-29 Thread Mike Hughes
I don't know exactly what kind of reporting you're looking for, but I've been 
using this for years:

https://github.com/moisseev/BackupPC_report

I have an ansible script that pulls the code from above, appends a .pl 
extension to the executable, then puts it into /usr/share/BackupPC/bin/.

Then I create a cronjob for weekly reports and another daily cron to notify me 
of errors ('BackupPC_report.pl -s')
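
The crontab entries look roughly like this (a sketch; mail's -E flag skips 
sending when the report is empty, and the address is a placeholder):

# Weekly summary, Monday morning
5 08 * * 1 /usr/share/BackupPC/bin/BackupPC_report.pl | mail -s "BackupPC weekly report" team@example.com
# Daily, mails only when there are errors
5 08 * * * /usr/share/BackupPC/bin/BackupPC_report.pl -s | mail -E -s "BackupPC errors" team@example.com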


From: orsomann...@gmail.com 
Sent: Wednesday, September 29, 2021 10:51 AM
To: G.W. Haywood via BackupPC-users 
Subject: Re: [BackupPC-users] "mutt" and "/etc/aliases"

> It seems to me that this is a mail client question, not a BackupPC
> question.

I know, and ... honestly: I posted here in the hope that someone could
suggest an alternative way to send notifications from BackupPC, because
I went insane trying to send attachments with the sendmail provided by
Postfix ...


> Perhaps you will need to create a script which takes your /etc/aliases
> and outputs text suitable for mutt

Thank you so much for your answer!



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] maintaining exclusion templates

2021-05-27 Thread Mike Hughes
Hi, Ghislain! That's a lot fancier than I was going for, but I can certainly 
use what you've shared to solve my question.
Thank you for responding!

From: Ghislain Adnet 
Sent: Thursday, May 27, 2021 7:41 AM
To: Mike Hughes 
Cc: General list for user discussion, questions and support 

Subject: Re: [BackupPC-users] maintaining exclusion templates

On 20/05/2021 at 21:18, Mike Hughes wrote:
> Hi BackupPC users,
>
> I'd like to maintain several templates for exclusion lists to be available 
> across all clients. For example, the LINUX_WWW template might exclude these 
> directories:
> /proc
> /sys
> /home/this_user
> /mnt/this_network_share
> /tmp


>
Hi,

  I use something like this, which I include in each host file:



 while ( my ($key,$value) = each(%{$Conf{BackupFilesExclude}}) ) {
     push (@{$value}, '**/lost+found/**' );
     push (@{$value}, '**/.svn/**' );

     # SpamAssassin
     push (@{$value}, '**/.spamassassin/bayes*' );
     push (@{$value}, '**/.spamassassin/auto*' );
     # Symfony 3
     push (@{$value}, '**/app/cache/**' );
     push (@{$value}, '**/app/logs/**' );
     # Symfony 4
     push (@{$value}, '**/var/cache/**' );
     push (@{$value}, '**/var/logs/**' );

     # mystery CMS; we found this path in use
     push (@{$value}, '**/www/cache/**' );

     if ( "$key" eq "/etc" ){
         push (@{$value}, '/etc/webmin/mailboxes' );
     }
 }


I include it in my server file with:

do "/etc/backuppc/include/baseexclude.pl";

--
cordialement,
Ghislain
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


[BackupPC-users] maintaining exclusion templates

2021-05-20 Thread Mike Hughes
Hi BackupPC users,

I'd like to maintain several templates for exclusion lists to be available 
across all clients. For example, the LINUX_WWW template might exclude these 
directories:
/proc
/sys
/home/this_user
/mnt/this_network_share
/tmp
etc.

And the LINUX_SQL template might exclude:
/proc
/sys
/var/lib/mysql

The goal would be to apply the templates across multiple clients while not 
interfering with updates. Any perl pros out there already doing something 
similar?

I did have a 'my @common_excludes' variable created in each client's config, 
but this gets messy, especially when the configs are edited via the GUI.

Thanks for any tips!
Mike
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Handling machines too large to back themselves up

2021-04-08 Thread Mike Hughes
Hi Dave,

You can always break a backup job into multiple backup 'hosts' by using the 
ClientNameAlias setting. I create hosts based on the share or folder for each 
job, then use the ClientNameAlias to point them to the same host.


From: Dave Sherohman 
Sent: Thursday, April 8, 2021 8:22 AM
To: General list for user discussion, questions and support 

Subject: [BackupPC-users] Handling machines too large to back themselves up


I have a server which I'm not able to back up because, apparently, it's just 
too big.

If you remember me asking about synology's weird rsync a couple weeks ago, it's 
that machine again.  We finally solved the rsync issues by ditching the 
synology rsync entirely and installing one built from standard rsync source code 
and using that instead.  Using that, we were able to get one "full" backup, but 
it missed a bunch of files because we forgot to use sudo when we did it.  (The 
synology rsync is set up to run suid root and is hardcoded to not allow root to 
run it, so we had to take sudo out for that, then forgot to add it back in when 
we switched to standard rsync.)

Since then, every attempted backup has failed, either full or incremental, 
because the synology is running out of memory:

This is the rsync child about to exec /usr/libexec/backuppc-rsync/rsync_bpc
Xfer PIDs are now 1228998,1229014
xferPids 1228998,1229014
ERROR: out of memory in receive_sums [sender]
rsync error: error allocating core memory buffers (code 22) at util2.c(118) 
[sender=3.2.0dev]
Done: 0 errors, 0 filesExist, 0 sizeExist, 0 sizeExistComp, 0 filesTotal, 0 
sizeTotal, 0 filesNew, 0 sizeNew, 0 sizeNewComp, 32863617 inode
rsync_bpc: [generator] write error: Broken pipe (32)



The poor little NAS has only 6G of RAM vs. 9.4 TB of files (configured as two 
sharenames, /volume1 (8.5T) and /volume2 (885G)) and doesn't seem up to the task 
of updating that much at once via rsync.

Adding insult to injury, even a failed attempt to back it up causes the bpc 
server to take 45 minutes to copy the directory structure from the previous 
backup before it even attempts to connect, and then 12-14 hours doing reference 
counts after it finishes backing up nothing.  Which makes trial-and-error 
painfully slow, since we can only try one thing, at most, each day.

In our last attempt, I tried flipping the order of the RsyncShareNames to do 
/volume2 first, thinking it might successfully back up the smaller share 
successfully before running out of memory trying to process the larger one.  It 
did not run out of memory... but it did sit there for a full 24 hours with one 
CPU (out of four) running pegged at 99% handling the rsync process before we 
finally put it out of its misery.  The bpc xferlog recorded that the connection 
was closed unexpectedly (which is fair, since we killed the other end) after 
3182 bytes were received, so the client clearly hadn't started sending data 
yet.  And now, after that attempt, the bpc server still lists the status as 
"refCnt #2" another 24 hours after the client-side rsync was killed.

So, aside from adding RAM, is there anything else we can do to try to work 
around this?  Would it be possible to break this one backup down into smaller 
chunks that are still recognized as a single host (so they run in sequence and 
don't get scheduled concurrently), but don't require the client to diff large 
amounts of data in one go, and maybe also speed up the reference counting a bit?

An "optimization" (or at least an option) to completely skip the reference 
count updates after a backup fails with zero files received (and, therefore, no 
new/changed references to worry about) might also not be a bad idea.
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] DumpPreUserCmd didnt run anything

2020-08-22 Thread Mike Hughes
Hi Taste (nice domain name!),

Are you including the $sshPath prefix as shown in the example?

 $Conf{DumpPreUserCmd} = '$sshPath -q -x -l root $host /usr/bin/dumpMysql';

From: Taste-Of-IT 
Sent: Friday, August 21, 2020 5:34 PM
To: backuppc-users@lists.sourceforge.net 
Subject: [BackupPC-users] DumpPreUserCmd didnt run anything

Hello,

I've been trying for days to use the DumpPreUserCmd option. Most recently I 
tried simply '/bin/bash -c /bin/touch /tmp/test.txt'; and '/bin/touch 
/tmp/test.txt';, but every time the xferlog shows "executing DumpPreUserCmd 
failed".

If I run the same command as the backuppc user in bash, it works. OK, BackupPC 
uses CGI and not a classic shell, but why doesn't it work? I use version 4.3.2 
under Debian 10. Backups otherwise work well.

I don't know where to look or what to do; I need help.

thx

--
Taste


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] [BackupPC-devel] BackupPC 4.4.0 released

2020-06-29 Thread Mike Hughes
For what it's worth, I was able to resolve this (for now) by updating CPAN 
itself. I noticed when running certain commands that it complained that CPAN 
was at version 1.x and version 2.28 was available. It suggested running:
install CPAN
reload cpan

But those are clearly not bash commands. They need to be run in the CPAN shell:
# perl -MCPAN -e shell

After completing, I again ran the following as the backuppc user and it 
reported the correct version:
$ /usr/bin/perl -e 'use BackupPC::XS; print("$BackupPC::XS::VERSION\n")'
0.62

This is likely not the best way to resolve a mismatched cpan module version, 
but it does appear to have worked for me, for now. I promise not to complain 
next time an update comes through and I end up having to rebuild from .iso.

From: Craig Barratt via BackupPC-users 
Sent: Thursday, June 25, 2020 5:43 PM
To: General list for user discussion, questions and support 

Cc: Craig Barratt 
Subject: Re: [BackupPC-users] [BackupPC-devel] BackupPC 4.4.0 released

You can install the perl module Module::Path to find the path for a module.

After installing, do this:
perl -e 'use Module::Path "module_path"; 
print(module_path("BackupPC::XS")."\n");'

Example output:
/usr/local/lib/x86_64-linux-gnu/perl/5.26.1/BackupPC/XS.pm

Now try as root and the BackupPC user to see the difference.  Does the BackupPC 
user have permission to access the version root uses?

You can also print the module search path with:
perl -e 'print join("\n", @INC),"\n"'

Does that differ between root and the BackupPC user?

Craig

On Thu, Jun 25, 2020 at 9:48 AM Les Mikesell <lesmikes...@gmail.com> wrote:
> The system got itself into this state from a standard yum update.

That's why you want to stick to all packaged modules whenever
possible.   Over time, dependencies can change and the packaged
versions will update together.  You can probably update a cpan module
to the correct version manually but you need to track all the version
dependencies yourself.   There are some different approaches to
removing modules: https://www.perlmonks.org/?node_id=1134981


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] [BackupPC-devel] BackupPC 4.4.0 released

2020-06-25 Thread Mike Hughes
The system got itself into this state from a standard yum update. I only 
intervened once the BackupPC service failed to start after a reboot.
From what I found, it looked like updating to .62 was the right direction. And 
now I learned that there is no way to cleanly uninstall a cpan module. Ugh.
So am I looking at a purge and a reinstall? If so, is there a guide on how to 
do that?
Thanks for any tips!

From: Richard Shaw 
Sent: Thursday, June 25, 2020 10:08 AM
To: General list for user discussion, questions and support 

Subject: Re: [BackupPC-users] [BackupPC-devel] BackupPC 4.4.0 released

On Thu, Jun 25, 2020 at 10:02 AM Mike Hughes <m...@visionary.com> wrote:
Certainly a mismatch. Here's my output. Hopefully it formats cleanly. How can I 
fix this while waiting for the patch to roll out?

Well, I'm not sure how to clean up the mess, but the problem is simple. You 
don't want to mix manual cpan installs with packages. There's no reason to use 
cpan at all if you're using my packages.

My guess is the cpan installs are going into /usr/local which is overriding the 
package installs.

Thanks,
Richard
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] FEATURE REQUEST: More robust error reporting/emailing

2020-06-25 Thread Mike Hughes
Daily report:
https://github.com/moisseev/BackupPC_report


From: backu...@kosowsky.org 
Sent: Thursday, June 25, 2020 8:42 AM
To: General list for user discussion 
Subject: [BackupPC-users] FEATURE REQUEST: More robust error reporting/emailing

It would be great if there could be a way to have a host/share
configurable way to trigger emails based on certain types of errors or
changes.

The goal being to avoid the "complacency" of backups continuing to run
but not being aware of either continuing backup errors or unexpected
changes to the underlying system.

Otherwise, one must rely on regular and pretty detailed review of logs
and stats.

Helpful configurable options would include:
- *Days* since last successful backup - *per host* configurable - as you
  may want to be more paranoid about certain hosts versus others while
  others you may not care if it gets backed up regularly and you want
  to avoid the "nag" emails

- *Number* of errors in last backup - *per host/per share*
  configurable - Idea being that some hosts may naturally have more
  errors due to locked files or fleeting files while other shares may
  be rock stable. (Potentially, one could even trigger on types of errors
  or you could exclude certain types of errors from the count)

- *Percent* of files changed/added/deleted in last backup relative to
  prior backup - *per host/per share* configurable - idea here being
  that you want to be alerted if something unexpected has changed on
  the host which could even be dramatic if a share has been damaged or
  deleted or not mounted etc.

Just a thought starter... I'm sure others may have other ideas to add...



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] [BackupPC-devel] BackupPC 4.4.0 released

2020-06-25 Thread Mike Hughes
Certainly a mismatch. Here's my output. Hopefully it formats cleanly. How can I 
fix this while waiting for the patch to roll out?

# head /usr/share/BackupPC/bin/BackupPC -n1
#!/usr/bin/perl
# grep "use\ lib" /usr/share/BackupPC/bin/BackupPC
use lib "/usr/share/BackupPC/lib";
# which cpan
/bin/cpan
# /usr/bin/perl -e 'use BackupPC::XS; print("$BackupPC::XS::VERSION\n")'
0.57
# cpan install BackupPC::XS
...
# /usr/bin/perl -e 'use BackupPC::XS; print("$BackupPC::XS::VERSION\n")'
0.62
# su backuppc -
$ /usr/bin/perl -e 'use BackupPC::XS; print("$BackupPC::XS::VERSION\n")'
0.57
$ cpan install BackupPC::XS
...
ERROR: Can't create '/root/perl5/lib/perl5/x86_64-linux-thread-multi/BackupPC'
$ /usr/bin/perl -e 'use BackupPC::XS; print("$BackupPC::XS::VERSION\n")'
0.57

From: Richard Shaw 
Sent: Wednesday, June 24, 2020 12:42 PM
To: General list for user discussion, questions and support 

Cc: Craig Barratt 
Subject: Re: [BackupPC-users] [BackupPC-devel] BackupPC 4.4.0 released

On Wed, Jun 24, 2020 at 12:19 PM Craig Barratt via BackupPC-users 
<backuppc-users@lists.sourceforge.net> wrote:
Mike,

It's possible you have two different versions of perl installed, or for some 
reason the BackupPC user is seeing an old version of BackupPC::XS.

Try some of the suggestions here: 
https://github.com/backuppc/backuppc/issues/351.

Yes, I just did a fresh install and didn't have any issues (with BackupPC-XS). 
I DID find that /var/run/BackupPC is not created by the package and cannot be 
created automatically since BackupPC is run as the backuppc user. Looking into 
that now.

Thanks,
Richard
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] [BackupPC-devel] BackupPC 4.4.0 released

2020-06-24 Thread Mike Hughes
I'm getting a service startup failure claiming my version of BackupPC-XS isn't 
up-to-snuff but it appears to meet the requirements:

BackupPC: old version 0.57 of BackupPC::XS: need >= 0.62; exiting in 30s

# rpm -qa | grep -i backuppc
BackupPC-XS-0.62-1.el7.x86_64
BackupPC-4.4.0-1.el7.x86_64


From: Richard Shaw 
Sent: Tuesday, June 23, 2020 10:47 AM
To: General list for user discussion, questions and support 

Subject: Re: [BackupPC-users] [BackupPC-devel] BackupPC 4.4.0 released

On Tue, Jun 23, 2020 at 8:24 AM Mike Hughes <m...@visionary.com> wrote:
Thanks so much Richard! Will COPR installations auto-update via yum repository 
updates or do we need to specifically run a COPR update manually?

Yes, as long as you install the repo file it will work just like any other 
repository.

Thanks,
Richard
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] [BackupPC-devel] BackupPC 4.4.0 released

2020-06-23 Thread Mike Hughes
Thanks so much Richard! Will COPR installations auto-update via yum repository 
updates or do we need to specifically run a COPR update manually?

From: Richard Shaw 
Sent: Monday, June 22, 2020 7:01 PM
To: General list for user discussion, questions and support 

Subject: Re: [BackupPC-users] [BackupPC-devel] BackupPC 4.4.0 released

Builds complete and updates submitted for Fedora and CentOS 8

https://bodhi.fedoraproject.org/updates/?packages=BackupPC

CentOS 7 builds available via COPR:

https://copr.fedorainfracloud.org/coprs/hobbes1069/BackupPC/

Thanks,
Richard
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] replication of data pool

2020-05-20 Thread Mike Hughes
Hi, we're currently syncing our cpool to an off-site location on a weekly 
basis. Would it be feasible to only sync the latest of each backup rather than 
the entire pool?

To elaborate, on Saturdays we run an rsync of the entire cpool to another 
server to provide disaster recovery options. Is it possible/reasonable to just 
copy the data from the night before? Or, with de-duplication and compression, 
would we really save much space/transfer time? If so, what is the best way to 
grab just one night's worth of backups while still preserving a full recovery?

Just curious if someone is already doing this and how you sorted it out.

Thanks!
Mike
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Force pool cleanup

2019-12-19 Thread Mike Hughes
Hi Gandalf,

This is what I use to clean up disk space:
nohup /usr/share/BackupPC/bin/BackupPC_nightly 0 255 &

If I want to watch it work I'll use this:
tail nohup.out -F

Usually finishes in 5-10 minutes.
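
(The two numbers are the start and end of the pool range BackupPC_nightly 
should process; 0 255 covers the whole pool in one pass, which is why it frees 
space right away instead of waiting for the regular nightly schedule.)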


--

Mike

On Thu, 2019-12-19 at 16:08 +0100, Gandalf Corvotempesta wrote:

Hi to all.

Any command to run manually to force deletion of "expired" files from
pool to free up disk space?

I'm running the nightly on a very very low schedule, but right now I
have to run it to clean up as much as possible.

Any hint?
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] DumpPreUserCmd logging stopped after updating to 4.3.1

2019-09-30 Thread Mike Hughes
Yeah, this is indeed still a problem for me. My DumpPreUserCmd needs to
preserve the logs. The problem is that if/when that cmd fails, the exit
status is checked and it registers a failure. Thus, the main backup is
aborted. With no main backup to process, there is no xfer log file
saved and the output of DumpPreUserCmd is irrecoverable. This makes it
very hard to troubleshoot a problem with no log output.

@CraigBarratt, Please consider reversing/revising this update!

On Wed, 2019-08-28 at 08:33 -0500, Mike Hughes wrote:
> On Thu, 2019-08-15 at 14:54 -0500, Mike Hughes wrote:
> > ...It looks like a change was made which stops these logs from
> > being
> > written to the BackupPC logfile:
> > https://github.com/backuppc/backuppc/issues/285
> > 
> > I guess my question is, was this intentional? Do I need to manage
> > my process logfile outside of BackupPC?
> > 
-- 
Mike 

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] home directory empty

2019-09-30 Thread Mike Hughes
This behavior has not changed over the years. You've found that the
/home folder exists on a separate partition from /. What is the value
of RsyncShareName? If it's just the default of "/", then that's the
only partition rsync will examine; no other partitions will be
considered. This is because of the default flag --one-file-system.
Again, it's up to you whether you target the /home partition directly
(by adding it to the RsyncShareName list) or capture all filesystems by
removing that default rsync flag.
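
For example, to back up both partitions while keeping --one-file-system (a
minimal sketch):

$Conf{RsyncShareName} = ['/', '/home'];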

The question about which overrides what in the include/exclude lists
doesn't come into play unless you're actually scanning the /home
partition. See the docs for the specifics once you get the partition to
scan.

Hope this helps!


On Mon, 2019-09-30 at 06:44 -0500, Bob Wooden wrote:
> I have been using BackupPC since early v3.0 days. Switched to v4 a
> few years ago.
> 
> I have always "exclude" directories to NOT backup. It has been my
> understanding that BackupPC users were to either "include" or
> "exclude" NOT both?
> 
> The "/" is on /dev/md1 and "/home" is on /dev/md2. Both on Linux
> (Ubuntu 18.04LTS) mdadm arrays.
> 
> Am I wrong? Doesn't "include" override any "exclude" settings?
> 
> 
> 
> On 9/29/19 9:02 AM, Mike Hughes wrote:
> > No, that is the default setting in BPC. So if your /home is on a
> > separate partition you either need to remove that setting, or add
> > the /home partition as a backup Target in addition to /.
> > Whichever is your best option is up to you.
> > 
> > On Sep 29, 2019 06:27, Bob Wooden  wrote:
> > Thanks, Michael.
> > 
> > Sorry, not clear if I am to run "rsync --one-file-system" as root
> > from 
> > command line?
> > 
> > The "--one-file-system" is listed in 'RsyncArgs'?
> > 
> > 
> > On 9/28/19 10:50 AM, Michael Stowe wrote:
> > > rsync --one-file-system
> > 
> > 
> > ___
> > BackupPC-users mailing list
> > BackupPC-users@lists.sourceforge.net
> > List:
> > https://lists.sourceforge.net/lists/listinfo/backuppc-users
> > Wiki:http://backuppc.wiki.sourceforge.net
> > Project: http://backuppc.sourceforge.net/
> > 
> > 
> > 
> > ___
> > BackupPC-users mailing list
> > BackupPC-users@lists.sourceforge.net
> > List:
> > https://lists.sourceforge.net/lists/listinfo/backuppc-users
> > Wiki:http://backuppc.wiki.sourceforge.net
> > Project: http://backuppc.sourceforge.net/
> 
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
-- 
Mike

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] home directory empty

2019-09-30 Thread Mike Hughes
Sorry I was grumpy this morning. Had a rough weekend. Appreciate your
thoughtful input.

On Mon, 2019-09-30 at 12:35 +0100, Jamie Burchell wrote:
> Since Bob hadn’t come back to confirm whether or not your answer
> worked for him, I was simply suggesting what is working for me in
> case it helps.
> 
> Kind regards,
> 
> Jamie
> --
> From: Mike Hughes [mailto:m...@visionary.com] 
> Sent: 30 September 2019 11:54
> To: General list for user discussion, questions and support <
> backuppc-users@lists.sourceforge.net>
> Subject: Re: [BackupPC-users] home directory empty
>  
> The answer has already been provided. If you're still unclear maybe
> try googling 'rsync one-file-system' or running lsblk on an affected
> system.
>  
> On Sep 30, 2019 03:51, Jamie Burchell  wrote:
> > I too am using CentOS 7 and that repo. The only thing I can think
> > is that defaults on CentOS 7 at least are that home directories are
> > owned by their respective user and nobody else can access them.
> > BackupPC would need to run as a privileged user for it to be able
> > to access those directories. I’d expect to see errors in the log
> > though if this were the issue.
> > 
> >  
> > 
> > I have:
> > 
> >  
> > 
> > $Conf{RsyncClientPath} = 'sudo /usr/bin/rsync';
> > 
> >  
> > 
> > $Conf{RsyncArgs} = [
> > 
> >   '--super',
> > 
> >   '--recursive',
> > 
> >   '--protect-args',
> > 
> >   '--numeric-ids',
> > 
> >   '--perms',
> > 
> >   '--owner',
> > 
> >   '--group',
> > 
> >   '-D',
> > 
> >   '--times',
> > 
> >   '--links',
> > 
> >   '--hard-links',
> > 
> >   '--delete',
> > 
> >   '--delete-excluded',
> > 
> >   '--one-file-system',
> > 
> >   '--partial',
> > 
> >   '--log-format=log: %o %i %B %8U,%8G %9l %f%L',
> > 
> >   '--stats',
> > 
> >   '--acls',
> > 
> >   '--xattrs'
> > 
> > ];
> > 
> > 
> > On the client machines in /etc/sudoers:
> > 
> >  
> > 
> > backuppc ALL=NOPASSWD: /usr/bin/rsync --server *
> > 
> >  
> > 
> > Kind regards,
> > 
> > Jamie
> > 
> > --
> > 
> > From: Mike Hughes [mailto:m...@visionary.com] 
> > Sent: 29 September 2019 15:02
> > To: General list for user discussion, questions and support <
> > backuppc-users@lists.sourceforge.net>
> > Subject: Re: [BackupPC-users] home directory empty
> > 
> >  
> > 
> > No, that is the default setting in BPC. So if your /home is on a
> > separate partition you either need to remove that setting, or add
> > the /home partition as a backup Target in addition to /.
> > 
> > Whichever is your best option is up to you.
> > 
> >  
> > 
> > On Sep 29, 2019 06:27, Bob Wooden  wrote:
> > 
> > Thanks, Michael.
> > 
> > Sorry, not clear if I am to run "rsync --one-file-system" as root
> > from 
> > command line?
> > 
> > The "--one-file-system" is listed in 'RsyncArgs'?
> > 
> > 
> > On 9/28/19 10:50 AM, Michael Stowe wrote:
> > > rsync --one-file-system
> > 
> > 
> > ___
> > BackupPC-users mailing list
> > BackupPC-users@lists.sourceforge.net
> > List:
> > https://lists.sourceforge.net/lists/listinfo/backuppc-users
> > Wiki:http://backuppc.wiki.sourceforge.net
> > Project: http://backuppc.sourceforge.net/
> > 
> 
>  
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
-- 
Mike

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] home directory empty

2019-09-30 Thread Mike Hughes
The answer has already been provided. If you're still unclear maybe try 
googling 'rsync one-file-system' or running lsblk on an affected system.

On Sep 30, 2019 03:51, Jamie Burchell  wrote:

I too am using CentOS 7 and that repo. The only thing I can think is that 
defaults on CentOS 7 at least are that home directories are owned by their 
respective user and nobody else can access them. BackupPC would need to run as 
a privileged user for it to be able to access those directories. I’d expect to 
see errors in the log though if this were the issue.



I have:

$Conf{RsyncClientPath} = 'sudo /usr/bin/rsync';

$Conf{RsyncArgs} = [
  '--super',
  '--recursive',
  '--protect-args',
  '--numeric-ids',
  '--perms',
  '--owner',
  '--group',
  '-D',
  '--times',
  '--links',
  '--hard-links',
  '--delete',
  '--delete-excluded',
  '--one-file-system',
  '--partial',
  '--log-format=log: %o %i %B %8U,%8G %9l %f%L',
  '--stats',
  '--acls',
  '--xattrs'
];

On the client machines, in /etc/sudoers:

backuppc ALL=NOPASSWD: /usr/bin/rsync --server *



Kind regards,

Jamie


--

From: Mike Hughes [mailto:m...@visionary.com]
Sent: 29 September 2019 15:02
To: General list for user discussion, questions and support 
<backuppc-users@lists.sourceforge.net>
Subject: Re: [BackupPC-users] home directory empty



No, that is the default setting in BPC. So if your /home is on a separate 
partition you either need to remove that setting, or add the /home partition as 
a backup Target in addition to /.

Whichever is your best option is up to you.



On Sep 29, 2019 06:27, Bob Wooden <b...@donelsontrophy.com> wrote:

Thanks, Michael.

Sorry, not clear if I am to run "rsync --one-file-system" as root from
command line?

The "--one-file-system" is listed in 'RsyncArgs'?


On 9/28/19 10:50 AM, Michael Stowe wrote:
> rsync --one-file-system



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] home directory empty

2019-09-29 Thread Mike Hughes
No, that is the default setting in BPC. So if your /home is on a separate 
partition you either need to remove that setting, or add the /home partition as 
a backup Target in addition to /.
Whichever is your best option is up to you.

On Sep 29, 2019 06:27, Bob Wooden  wrote:
Thanks, Michael.

Sorry, not clear if I am to run "rsync --one-file-system" as root from
command line?

The "--one-file-system" is listed in 'RsyncArgs'?


On 9/28/19 10:50 AM, Michael Stowe wrote:
> rsync --one-file-system


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC don't rsync /home folder

2019-09-19 Thread Mike Hughes
On Thu, 2019-09-19 at 15:20 +0100, Cogumelos Maravilha wrote:
> There's only one problem: the /home folder on some servers isn't
> getting rsynced.
...
>  --one-file-system

Have you verified whether the systems that are not working as expected
have /home on a separate partition?

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] new client won't back up

2019-09-18 Thread Mike Hughes
Hi BackupPC users,

I added a new host and it is encountering thousands of errors and won't
finish a backup. Instead it is dumping errors such as: 
file has vanished: "/etc/iscsi/initiatorname.iscsi"
rsync_bpc: fstat ... No such file or directory (2)
rsync_bpc: stat ... No such file or directory (2)
rsync_bpc: symlink ... failed: File exists (17)

More from the log is available here:
https://pastebin.com/rgeezxfi

This occurred once before with a different host and I don't believe I
ever solved it. I'm thinking of deleting the host from the backup
server and starting over unless anyone has a better solution.

Thanks!
Mike 

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC_dump stalls with SIGTTOU on Windows client using rsync

2019-09-06 Thread Mike Hughes
This seems like a dumb problem to have but I'm struggling to figure out how to 
tell BackupPC to ssh into the cygwin client as a local user.

The backuppc account was created locally on the machine to be backed up, thus 
it does not exist in AD. I am able to log in using other AD accounts with no 
problem using the format:
ssh username@my-host

I can connect to this local account if I prepend the hostname in uppercase with 
a plus(+) sign, like this:
ssh MY-HOST+backuppc@my-host

but cannot get BackupPC to log in to this local account. I've tried prepending 
the above to RsyncSshArgs:
$sshPath -l MY-HOST+backuppc

but without success. The logs indicate that it tried to use:
/usr/bin/ssh\ -l\ MY-HOST+backuppc@my-host

I've tried escaping the + but then I get:
/usr/bin/ssh\ -l\ MY-HOST\\+backuppc@my-host
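
For reference, the setting I'm editing looks like this (a sketch of my 
attempt; the stock value is just '$sshPath -l root'):

$Conf{RsyncSshArgs} = ['-e', '$sshPath -l MY-HOST+backuppc'];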
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC_dump stalls with SIGTTOU on Windows client using rsync

2019-09-05 Thread Mike Hughes
On Wed, 2019-09-04 at 23:00 -0700, Craig Barratt via BackupPC-users wrote:
In addition to the higher log level, it would be helpful to see the rsync 
command being run.  Is there anything in the XferLOG file?

Craig

On Wed, Sep 4, 2019 at 6:44 PM Michael Huntley <mich...@huntley.net> wrote:
Perhaps cranking up log level to 8 or 9 may help.

Mph
---

Thanks for the responses gents. I see the error now (not sure if it was staring 
me in the face before raising the loglevel) but there were connection issues:

bpc_attrib_backwardCompat: WriteOldStyleAttribFile = 0, KeepOldAttribFiles = 0
Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,password,keyboard-interactive).
rsync_bpc: connection unexpectedly closed (0 bytes received so far) [Receiver]

I was testing ssh using a special definition on the command line:

ssh MY-HOST+backuppc@my-host

but didn't provide those details to the client. Is there something in cygwin's 
sshd config file I can set to make it use local accounts by default?
Or do I need to find the correct syntax in BackupPC to indicate the local 
account on the RsyncSshArgs line?

Hopefully I'll get this sorted soon and share the fix.
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC_dump stalls with SIGTTOU on Windows client using rsync

2019-09-04 Thread Mike Hughes
No responses. Too much detail? Let me rephrase it:

Windows rsync backup no worky!
plz halp!!!
:-D

On Thu, 2019-08-15 at 14:44 -0500, Mike Hughes wrote:
Working to add a few Windows clients to our BackupPC system. I have 
passwordless ssh working and when I try kicking off a backup using the GUI it 
fails immediately.
Logs show:
2019-08-15 11:09:29 full backup started for directory /cygdrive/c
2019-08-15 11:09:31 Got fatal error during xfer (No files dumped for share 
/cygdrive/c)
2019-08-15 11:09:36 Backup aborted (No files dumped for share /cygdrive/c)

'BackupPC_dump -f -v ' from the command line looks like it's running 
and the log indicates "full backup started for directory /cygdrive/c" but it 
hangs there.
Looking at the XFER PIDs I see:
# strace -p 21912
strace: Process 21912 attached
--- stopped by SIGTTOU ---

When I ctrl-c the inactive process these are some of the messages dumped to the 
screen:
^C^Cexiting after signal INT
__bpc_progress_state__ fail cleanup
BackupFailCleanup: nFilesTotal = 0, type = full, BackupCase = 1, inPlace = 1, 
lastBkupNum =
Removing empty backup #0
__bpc_progress_state__ delete #0
cmdSystemOrEval: about to system /usr/share/BackupPC/bin/BackupPC_backupDelete 
-h  -n 0 -l

BackupPC version: 4.3.1
rsync_bpc: version 3.1.2.0  protocol version 31

Client: Windows Server 2016
Cygwin64:
$ uname -r
3.0.7(0.338/5/3)
openssh 8.0p1-2
rsync 3.1.2-1

Any help appreciated!
Thanks!

--

Mike
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] DumpPreUserCmd logging stopped after updating to 4.3.1

2019-08-28 Thread Mike Hughes
Following up on this, I just realized that the output from DumpPreUserCmd is 
being stored under the XferLOG and the error logs. If I open the backup's 
primary log file it only shows the rsync activity, but all my dumpPreUserCmd 
output is still being written to the detailed log files. I still need to 
confirm that the backup will hard-fail on an error from my user cmd, which is 
required for me to be aware that a database or table dump was not successful.

On Thu, 2019-08-15 at 14:54 -0500, Mike Hughes wrote:
I use a DumpPreUserCmd to kick off a database dump script. This script used to 
log its actions within each server's BackupPC log so I could verify which 
databases and tables were dumped, which were skipped, etc.
It looks like a change was made which stops these logs from being written to 
the BackupPC logfile:
https://github.com/backuppc/backuppc/issues/285

I guess my question is, was this intentional? Do I need to manage my process 
logfile outside of BackupPC?

Second question: are we still sending a termination signal if 
UserCmdCheckStatus exits with a failure?

Thanks!

--

Mike

--

Mike Hughes <m...@visionary.com>
Visionary Services, inc.
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] DumpPreUserCmd logging stopped after updating to 4.3.1

2019-08-15 Thread Mike Hughes
I use a DumpPreUserCmd to kick off a database dump script. This script used to 
log its actions within each server's BackupPC log so I could verify which 
databases and tables were dumped, which were skipped, etc.
It looks like a change was made which stops these logs from being written to 
the BackupPC logfile:
https://github.com/backuppc/backuppc/issues/285

I guess my question is, was this intentional? Do I need to manage my process 
logfile outside of BackupPC?

Second question: are we still sending a termination signal if 
UserCmdCheckStatus exits with a failure?

Thanks!

--

Mike
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] BackupPC_dump stalls with SIGTTOU on Windows client using rsync

2019-08-15 Thread Mike Hughes
Working to add a few Windows clients to our BackupPC system. I have 
passwordless ssh working and when I try kicking off a backup using the GUI it 
fails immediately.
Logs show:
2019-08-15 11:09:29 full backup started for directory /cygdrive/c
2019-08-15 11:09:31 Got fatal error during xfer (No files dumped for share 
/cygdrive/c)
2019-08-15 11:09:36 Backup aborted (No files dumped for share /cygdrive/c)

'BackupPC_dump -f -v ' from the command line looks like it's running 
and the log indicates "full backup started for directory /cygdrive/c" but it 
hangs there.
Looking at the XFER PIDs I see:
# strace -p 21912
strace: Process 21912 attached
--- stopped by SIGTTOU ---

When I ctrl-c the inactive process these are some of the messages dumped to the 
screen:
^C^Cexiting after signal INT
__bpc_progress_state__ fail cleanup
BackupFailCleanup: nFilesTotal = 0, type = full, BackupCase = 1, inPlace = 1, 
lastBkupNum =
Removing empty backup #0
__bpc_progress_state__ delete #0
cmdSystemOrEval: about to system /usr/share/BackupPC/bin/BackupPC_backupDelete 
-h  -n 0 -l

BackupPC version: 4.3.1
rsync_bpc: version 3.1.2.0  protocol version 31

Client: Windows Server 2016
Cygwin64:
$ uname -r
3.0.7(0.338/5/3)
openssh 8.0p1-2
rsync 3.1.2-1
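
One thing I still plan to try is detaching the dump from its controlling
terminal, since SIGTTOU is tty output control. Untested sketch ("myhost" is a
placeholder):

sudo -u backuppc setsid /usr/share/BackupPC/bin/BackupPC_dump -f -v myhost </dev/null >/tmp/bpc_dump.log 2>&1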

Any help appreciated!
Thanks!

--

Mike
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BACKUPPC V4 - rsync-bpc error connecting to client

2019-06-05 Thread Mike Hughes
On Wed, 2019-06-05 at 11:51 -0600, David Wynn via BackupPC-users wrote:
> debug1: Sending command: 192.168.1.6 rsync --server --sender
> -slHogDtprcxe.iLsfxC
> sh: 192.168.1.6: not found

This line tells us that your shell (sh) is trying to run the command
"192.168.1.6". That looks like an IP address rather than a command. Did
you modify the RsyncBackupPCPath variable?
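
For comparison, my understanding is the stock 4.x settings look roughly like
this (a sketch from memory, not a definitive reference):

$Conf{RsyncBackupPCPath} = '/usr/bin/rsync_bpc';
$Conf{RsyncSshArgs}      = ['-e', '$sshPath -l root'];
$Conf{RsyncClientPath}   = '/usr/bin/rsync';

If the '-e ...' part goes missing or gets rearranged, the host name can end up
as the first word of the remote command, which would match the "sh:
192.168.1.6: not found" above.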

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Xfer errors in "Failures that need attention"

2019-03-13 Thread Mike Hughes
>From: Karol Jędrzejczyk 
>I'm using BackupPC to back up a bunch of servers only. In this scenario
>any transfer error is critical. 

Hi Karol,
I like to receive reports of any failures so I install BackupPC_report [1] on 
the BackupPC server then I explore the logs from any that error out. I call it 
with these cronjobs to reduce the amount of clutter in my inbox:

#Ansible: BackupPC - Send report Monday morning
5 08 * * 1 /usr/share/BackupPC/bin/BackupPC_report.pl | mail -r `hostname 
-s`@`hostname -d` -s "BackupPC@`hostname -s` weekly report" 
my_team@my_domain.com
#Ansible: BackupPC - Send report on error
5 08 * * * /usr/share/BackupPC/bin/BackupPC_report.pl -s | mail -E -r `hostname 
-s`@`hostname -d` -s "BackupPC@`hostname -s` warning" my_team@my_domain.com

[1] https://github.com/moisseev/BackupPC_report 

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] IO error encountered -- skipping file deletion

2018-12-17 Thread Mike Hughes
Hi Daniel,

It sounds like the exclusion rules aren’t working as you expect. If they were, I 
don’t think you’d see the errors even if the pool files had a problem, since 
they’d never be compared. I ran into problems when I set up mine too. If I 
recall my confusion was around identifying the share name in the exclusion 
rule. I often use an asterisk to cover all shares so my exclusion statements 
look like this:

$Conf{BackupFilesExclude} = {
  '*' => [
'/var/log/pve/tasks',
'/var/log/pve/etc'
  ]
};
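
If you'd rather scope it to a single share, my understanding is that the key
must match the share name exactly, e.g.:

$Conf{BackupFilesExclude} = {
  '/' => [
'/var/log/pve/tasks'
  ]
};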

Hope this helps!

From: Daniel Berteaud 
Sent: Monday, December 17, 2018 02:50
To: General list for user discussion, questions and support 

Subject: [BackupPC-users] IO error encountered -- skipping file deletion


Hi.



I'm running BackupPC v4.3.0 on CentOS 7 for a bit more than 200 hosts. Most are 
using rsync Xfer method and are working fine.

But for two of them, I started last week to get 3 and 6 xfer errors 
respectively, on every new backup (no matter whether I start a new full or 
incremental backup).

Eg:



[ skipped 57 lines ]

file has vanished: 
"/var/log/pve/tasks/6/UPID:pve3:597E:01554D97:5C0C5076:vzdump::root@pam:"

IO error encountered -- skipping file deletion

[ skipped 82 lines ]

rsync_bpc: fstat 
"/var/log/pve/tasks/6/UPID:pve3:597E:01554D97:5C0C5076:vzdump::root@pam:" 
failed: No such file or directory (2)

[ skipped 5 lines ]

rsync_bpc: fstat 
"/var/log/pve/tasks/E/UPID:pve3:5875:0162BE48:5C0C72DE:aptupdate::root@pam:"
 failed: No such file or directory (2)

I did have those files when the first backup reporting the errors ran. But they 
do not even exist anymore now. I've also tried to exclude the impacted 
directories, but the issue remains. So I guess there's something wrong in the 
pool. How can I check this? And how can I fix it so it doesn't report Xfer 
errors anymore?



Regards,

Daniel
--

[Logo FWS]

Daniel Berteaud

FIREWALL-SERVICES SAS.
Société de Services en Logiciels Libres
Tel : 05 56 64 15 32
Matrix: @dani:fws.fr
www.firewall-services.com


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Error when try to upgrade to version 4.2.1

2018-11-26 Thread Mike Hughes
Yeah Ed, sorry this isn't clearer on the main page but Craig discourages 
building from source. If you're on Cent/RHEL, use this repo instead:
https://copr.fedorainfracloud.org/coprs/hobbes1069/BackupPC/
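
Roughly, on CentOS 7 (assuming the copr plugin is available in your repos):

yum install yum-plugin-copr
yum copr enable hobbes1069/BackupPC
yum install BackupPC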

From: Ed Burgstaler 
Sent: Monday, November 26, 2018 12:38
To: backuppc-users@lists.sourceforge.net
Subject: [BackupPC-users] Error when try to upgrade to version 4.2.1

I have just attempted to install/update to version 4.2.1 from 3.3.2 and when I 
try to do so:
./configure.pl
I get the following error:
BackupPC needs the package version.  Please install version before installing 
BackupPC.

I have compiled and installed BackupPC-XS-0.58 as well prior to updating.

I don't understand what that means can anyone help?


-Ed
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Large files with small changes

2018-11-20 Thread Mike Hughes
Hi Steve,

It looks like they are stored using reverse deltas. Maybe you’ve already seen 
this from the V4.0 documentation:

  *   Backups are stored as "reverse deltas" - the most recent backup is always 
filled and older backups are reconstituted by merging all the deltas starting 
with the nearest future filled backup and working backwards.
This is the opposite of V3 where incrementals are stored as "forward deltas" to 
a prior backup (typically the last full backup or prior lower-level incremental 
backup, or the last full in the case of rsync).

  *   Since the most recent backup is filled, viewing/restoring that backup 
(which is the most common backup used) doesn't require merging any deltas from 
other backups.
  *   The concepts of incr/full backups and unfilled/filled storage are 
decoupled. The most recent backup is always filled. By default, for the 
remaining backups, full backups are filled and incremental backups are 
unfilled, but that is configurable.
Additionally, these tips might help rsync compute smaller deltas and reduce 
transfer bandwidth:

MySQL dump has an option  ‘--order-by-primary’ which sorts before/while dumping 
the database. Useful if you’re trying to limit the amount to be rsync’ed. 
You’ll need to evaluate the usefulness of this based on db design.

If you’re compressing your database look into the “--rsyncable” option 
available in the package pigz.
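
Putting those two together, a dump pipeline might look like this (database
name and output path are placeholders):

mysqldump --order-by-primary my_database | pigz --rsyncable > /backup/my_database.sql.gz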

From: Steve Richards 
Sent: Tuesday, November 20, 2018 04:34
To: backuppc-users@lists.sourceforge.net
Subject: [BackupPC-users] Large files with small changes


I think some backup programs are able to store just the changes ("deltas") in a 
file when making incrementals. Am I right in thinking that BackupPC doesn't do 
this, and would instead store the whole of each changed file as separate 
entries in the pool?

Reason for asking is that I want to implement a backup strategy for databases, 
which is likely to involve multi-megabyte SQL files that differ only slightly 
from day to day. I'm trying to decide how best to handle them.
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC administrative attention needed email incorrect?

2018-10-30 Thread Mike Hughes
>Jamie Burchell wrote on 2018-10-30 09:31:13 - [[BackupPC-users] BackupPC
>administrative attention needed email incorrect?]:
>> [...]
>> Yesterday, I received the following email from the BackupPC process:
>> [...]
>> > Yesterday 156 hosts were skipped because the file system containing
>> > /var/lib/BackupPC/ was too full.  [...]
>>
>> The email was correct in that disk space was low, but the number of
>> reported ???hosts skipped??? doesn???t seem right. I have 39 hosts, 152 full
>> backups and 952 incrementals. The email says they were skipped, but there
>> are no gaps that I can see in any of the backups. Just wondering if this is
>> a bug.
>
>without looking into the code, 156 seems to be 4 * 39 - could it be that
>after 4 wakeups disk space dropped low enough for backups to resume (by
>backup expiration or someone deleting something from the partition)? That
>would explain that there is no gap. You just might find the backups happened
>at a slightly later point in time than you would normally expect.
>
>Hope that helps.
>
>Regards,
>Holger

that makes perfect sense. That wording has caught me off-guard too. I think a 
more accurate phrasing would be:
"Yesterday 156 backups were skipped..."

Anyone care to show me how I can submit a simple patch for this in github? The 
obvious option is to click the pencil icon to edit, but it says I'd be forking 
the whole project.
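
From what I understand, the fork is the normal flow; roughly:

# after clicking "Fork" on backuppc/backuppc in the GitHub UI:
git clone https://github.com/YOURUSER/backuppc.git
cd backuppc
git checkout -b skipped-wording
grep -rn "hosts were skipped" bin lib    # locate the message
# edit the file, then:
git commit -am "Say 'backups' instead of 'hosts' in the skipped message"
git push origin skipped-wording
# finally, open a pull request against backuppc/backuppc from the GitHub UI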


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] deleted backups then host and pc folder yet data remains in pool

2018-10-23 Thread Mike Hughes
You are correct – I changed some defaults, including that one, which I set to 
'8'. So if I deleted that host 7 days ago, I can expect the pool to be gone in 
a day or two. I'll be patient.
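
(For the record, I believe a nightly pass can also be kicked off by hand, as
the backuppc user, with something like:

/usr/share/BackupPC/bin/BackupPC_serverMesg BackupPC_nightly run

though I'd rather just wait.)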
Thank you Craig!

From: Craig Barratt via BackupPC-users 
Sent: Monday, October 22, 2018 20:52
To: General list for user discussion, questions and support 

Cc: Craig Barratt 
Subject: Re: [BackupPC-users] deleted backups then host and pc folder yet data 
remains in pool

What is your setting for $Conf{BackupPCNightlyPeriod}?  The default is 1, but 
if it's set to something larger then it will take that number of days/nights 
for the entire pool to be traversed.

It does appear that some of the files are getting deleted - yesterday afternoon 
it deleted 242189 files (although the total size wasn't very large), and 
2149846 remain.  That could be reasonable after 6 days with 
$Conf{BackupPCNightlyPeriod} set to 16.

Craig

On Mon, Oct 22, 2018 at 1:21 PM Mike Hughes <m...@visionary.com> wrote:
A host was created to duplicate the cpool from another BackupPC server. It was 
set to skip compression and successfully filled the uncompressed pool with 
~300GB of data. On the advice of others, I decided to use rsync in a cronjob 
instead, so my intent was to delete this host and its data.

First I went into the host's backup summary and clicked "Delete" for each of 
the four backups that were listed. Then I followed the documentation under 
"Other installation topics - Removing a client" where it said to: "remove its 
entry in the conf/hosts file, and then delete the /var/lib/BackupPC//pc/$host 
directory."

The data still resides in the uncompressed pool - this is six days after I 
performed the above actions:

* Uncompressed pool:
  o Pool is 306.65GiB comprising 2149846 files and 16512 directories (as of 
10/21 13:06),
  o Pool hashing gives 0 repeated files with longest chain 0,
  o Nightly cleanup removed 242189 files of size 0.94GiB (around 10/21 13:06),
* Compressed pool:
  o Pool is 141.48GiB comprising 1570628 files and 16512 directories (as of 
10/21 13:06),
  o Pool hashing gives 0 repeated files with longest chain 0,
  o Nightly cleanup removed 4949 files of size 26.05GiB (around 10/21 13:06),
* Pool file system was recently at 46% (10/22 08:00), today's max is 46% (10/21 
13:00) and yesterday's max was 46%.

I'm concerned I may have created a race-condition when I asked the system to 
delete the individual backups then removed the client and pc folders before it 
could complete. This was the only host set to not use compression and I believe 
the full 306GB in pool is solely for this host. Is it safe to delete the 
contents of pool?


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] deleted backups then host and pc folder yet data remains in pool

2018-10-22 Thread Mike Hughes
A host was created to duplicate the cpool from another BackupPC server. It was 
set to skip compression and successfully filled the uncompressed pool with 
~300GB of data. On the advice of others, I decided to use rsync in a cronjob 
instead, so my intent was to delete this host and its data.

First I went into the host's backup summary and clicked "Delete" for each of 
the four backups that were listed. Then I followed the documentation under 
"Other installation topics - Removing a client" where it said to: "remove its 
entry in the conf/hosts file, and then delete the /var/lib/BackupPC//pc/$host 
directory."

The data still resides in the uncompressed pool - this is six days after I 
performed the above actions:

* Uncompressed pool:
  o Pool is 306.65GiB comprising 2149846 files and 16512 directories (as of 
10/21 13:06),
  o Pool hashing gives 0 repeated files with longest chain 0,
  o Nightly cleanup removed 242189 files of size 0.94GiB (around 10/21 13:06),
* Compressed pool:
  o Pool is 141.48GiB comprising 1570628 files and 16512 directories (as of 
10/21 13:06),
  o Pool hashing gives 0 repeated files with longest chain 0,
  o Nightly cleanup removed 4949 files of size 26.05GiB (around 10/21 13:06),
* Pool file system was recently at 46% (10/22 08:00), today's max is 46% (10/21 
13:00) and yesterday's max was 46%.

I'm concerned I may have created a race-condition when I asked the system to 
delete the individual backups then removed the client and pc folders before it 
could complete. This was the only host set to not use compression and I believe 
the full 306GB in pool is solely for this host. Is it safe to delete the 
contents of pool?


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] syncing local and cloud backups

2018-10-14 Thread Mike Hughes
Thanks for the information Ed. I figured I could leave the '-z' off the rsync 
command.

Regarding parallel backups: I see your point of chains exposing the potential 
to nuke all backups but aren't you increasing the exposure of your production 
system X2 by giving another backup process access to it? Just curious on your 
thoughts on that since you seem to have been down this road.


From: ED Fochler 
Sent: Sunday, October 14, 2018 10:23:13 AM
To: General list for user discussion, questions and support
Subject: Re: [BackupPC-users] syncing local and cloud backups

I can answer the rsync compression question: no. Running gzip'd data through 
gzip is a waste of CPU power. Depending on your link and CPU speed it may even 
slow down your ability to transfer data.
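
So a pool copy would simply drop the -z; something like this sketch
(destination host is a placeholder, -H preserves any hard links):

rsync -aH --numeric-ids --delete /var/lib/BackupPC/ offsite:/var/lib/BackupPC/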

As for the recovery from an rsync'd backup...
If your /etc/BackupPC and /var/lib/BackupPC directories are already symlinks to 
other locations, you can easily shut down BackupPC, swap links, and start it 
up.  So long as both systems are running the same version, it should come up 
cleanly.
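
In other words, the swap is roughly (paths here are hypothetical):

systemctl stop backuppc
ln -sfn /mnt/restored/etc-BackupPC /etc/BackupPC
ln -sfn /mnt/restored/var-lib-BackupPC /var/lib/BackupPC
systemctl start backuppc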

I gave up backing up the backup server though.  If you want proper redundancy 
you run backups in parallel, not in a chain.  If one backup server has access 
to the other backup server, then it has the potential (if compromised) to 
destroy all of your backups and originals from one location.  Redundant backups 
should live in separate private enclaves.

ED.



> On 2018, Oct 13, at 8:52 PM, Mike Hughes  wrote:
>
> Another related question: Does it make sense to use rsync's compression when 
> transferring cpool? If that data is already compressed, am I gaining much by 
> having rsync try to compress it again?
> Thanks!
> From: Mike Hughes 
> Sent: Friday, October 12, 2018 8:25 AM
> To: General list for user discussion, questions and support
> Cc: Craig Barratt
> Subject: Re: [BackupPC-users] syncing local and cloud backups
>
> Cool, thanks for the idea Craig. So that will provide a backup of the entire 
> cpool and associated metadata necessary to rebuild hosts in the event of a 
> site loss, but what would that process look like?
>
> Say I have the entire ‘/etc/BackupPC’ folder rsynced to an offsite disk. What 
> would the recovery process look like? From what I’m thinking I’d have to 
> rsync the entire folder back to the destination site, do a fresh install of 
> BackupPC and associate it with this new folder. Is that about right? Would 
> there not be a method to extract an important bit of data from the cpool 
> without performing an entire site restore? I’m considering the situation 
> where I have data of separate priority. That one cpool might contain several 
> TB of files along with a few important servers of higher priority. The only 
> option looks like a full site restore after rsyncing everything back. Am I 
> thinking of this correctly?
>
> From: Craig Barratt via BackupPC-users 
> Sent: Thursday, October 11, 2018 20:01
> To: General list for user discussion, questions and support 
> 
> Cc: Craig Barratt 
> Subject: Re: [BackupPC-users] syncing local and cloud backups
>
> I'd recommend just using rsync if you want to make a remote copy of the 
> cpool, pc and conf directories, to a place that BackupPC doesn't back up.
>
> Craig
>
> On Thu, Oct 11, 2018 at 10:22 AM Mike Hughes  wrote:
> Hi BackupPC users,
>
> Similar questions have come up a few times but I have not found anything 
> relating to running multiple pools. Here's our setup:
> - On-prem dev servers backed up locally to BackupPC (4.x)
> - Prod servers backed up in the cloud to a separate BackupPC (4.x) instance
>
> I'd like to provide disaster recovery options by syncing the dedup'd pools 
> from on-prem to cloud and vice-versa but this would create an infinite loop. 
> Is it possible to place the off-site data into a separate cpool which I could 
> exclude from the sync? It would also be nice to be able to extract files from 
> the synced pool individually without having to pull down the whole cpool and 
> reproducing the entire BackupPC server.
>
> How do others manage on-prem and off-site backup synchronization?
> Thanks,
> Mike
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/




Re: [BackupPC-users] syncing local and cloud backups

2018-10-13 Thread Mike Hughes
Another related question: Does it make sense to use rsync's compression when 
transferring cpool? If that data is already compressed, am I gaining much by 
having rsync try to compress it again?

Thanks!

From: Mike Hughes 
Sent: Friday, October 12, 2018 8:25 AM
To: General list for user discussion, questions and support
Cc: Craig Barratt
Subject: Re: [BackupPC-users] syncing local and cloud backups


Cool, thanks for the idea Craig. So that will provide a backup of the entire 
cpool and associated metadata necessary to rebuild hosts in the event of a site 
loss, but what would that process look like?



Say I have the entire ‘/etc/BackupPC’ folder rsynced to an offsite disk. What 
would the recovery process look like? From what I’m thinking I’d have to rsync 
the entire folder back to the destination site, do a fresh install of BackupPC 
and associate it with this new folder. Is that about right? Would there not be 
a method to extract an important bit of data from the cpool without performing 
an entire site restore? I’m considering the situation where I have data of 
separate priority. That one cpool might contain several TB of files along with 
a few important servers of higher priority. The only option looks like a full 
site restore after rsyncing everything back. Am I thinking of this correctly?



From: Craig Barratt via BackupPC-users 
Sent: Thursday, October 11, 2018 20:01
To: General list for user discussion, questions and support 

Cc: Craig Barratt 
Subject: Re: [BackupPC-users] syncing local and cloud backups



I'd recommend just using rsync if you want to make a remote copy of the cpool, 
pc and conf directories, to a place that BackupPC doesn't back up.



Craig



On Thu, Oct 11, 2018 at 10:22 AM Mike Hughes <m...@visionary.com> wrote:

Hi BackupPC users,

Similar questions have come up a few times but I have not found anything 
relating to running multiple pools. Here's our setup:
- On-prem dev servers backed up locally to BackupPC (4.x)
- Prod servers backed up in the cloud to a separate BackupPC (4.x) instance

I'd like to provide disaster recovery options by syncing the dedup'd pools from 
on-prem to cloud and vice-versa but this would create an infinite loop. Is it 
possible to place the off-site data into a separate cpool which I could exclude 
from the sync? It would also be nice to be able to extract files from the 
synced pool individually without having to pull down the whole cpool and 
reproducing the entire BackupPC server.

How do others manage on-prem and off-site backup synchronization?
Thanks,
Mike


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] syncing local and cloud backups

2018-10-12 Thread Mike Hughes
Cool, thanks for the idea Craig. So that will provide a backup of the entire 
cpool and associated metadata necessary to rebuild hosts in the event of a site 
loss, but what would that process look like?

Say I have the entire ‘/etc/BackupPC’ folder rsynced to an offsite disk. What 
would the recovery process look like? From what I’m thinking I’d have to rsync 
the entire folder back to the destination site, do a fresh install of BackupPC 
and associate it with this new folder. Is that about right? Would there not be 
a method to extract an important bit of data from the cpool without performing 
an entire site restore? I’m considering the situation where I have data of 
separate priority. That one cpool might contain several TB of files along with 
a few important servers of higher priority. The only option looks like a full 
site restore after rsyncing everything back. Am I thinking of this correctly?

From: Craig Barratt via BackupPC-users 
Sent: Thursday, October 11, 2018 20:01
To: General list for user discussion, questions and support 

Cc: Craig Barratt 
Subject: Re: [BackupPC-users] syncing local and cloud backups

I'd recommend just using rsync if you want to make a remote copy of the cpool, 
pc and conf directories, to a place that BackupPC doesn't back up.

Craig

On Thu, Oct 11, 2018 at 10:22 AM Mike Hughes <m...@visionary.com> wrote:
Hi BackupPC users,

Similar questions have come up a few times but I have not found anything 
relating to running multiple pools. Here's our setup:
- On-prem dev servers backed up locally to BackupPC (4.x)
- Prod servers backed up in the cloud to a separate BackupPC (4.x) instance

I'd like to provide disaster recovery options by syncing the dedup'd pools from 
on-prem to cloud and vice-versa but this would create an infinite loop. Is it 
possible to place the off-site data into a separate cpool which I could exclude 
from the sync? It would also be nice to be able to extract files from the 
synced pool individually without having to pull down the whole cpool and 
reproducing the entire BackupPC server.

How do others manage on-prem and off-site backup synchronization?
Thanks,
Mike


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] syncing local and cloud backups

2018-10-11 Thread Mike Hughes
Hi BackupPC users,

Similar questions have come up a few times but I have not found anything 
relating to running multiple pools. Here's our setup:
- On-prem dev servers backed up locally to BackupPC (4.x)
- Prod servers backed up in the cloud to a separate BackupPC (4.x) instance

I'd like to provide disaster recovery options by syncing the dedup'd pools from 
on-prem to cloud and vice-versa but this would create an infinite loop. Is it 
possible to place the off-site data into a separate cpool which I could exclude 
from the sync? It would also be nice to be able to extract files from the 
synced pool individually without having to pull down the whole cpool and 
reproducing the entire BackupPC server.

How do others manage on-prem and off-site backup synchronization? 
Thanks,
Mike


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC 4.2.1 apparently in an infinite loop.

2018-09-16 Thread Mike Hughes
Michael,

Condescending and belittling treatment of others in this list is not the norm. 
Your personal attacks are unwarranted and unhelpful.


Since you reached out and asked for help in relating to others I will share 
with you this piece from Dr. Phil Agre which helped me immensely when I was 
learning the ropes at the helpdesk and I still refer to it regularly to help 
keep me on track:


"How to help someone use a computer.


Computer people are generally fine human beings, but nonetheless they do a lot 
of inadvertent harm in the ways they "help" other people with their computer 
problems. Now that we're trying to get everyone on the net, I thought it might 
be helpful to write down everything I've been taught about helping people use 
computers.

First you have to tell yourself some things:

  1.  Nobody is born knowing this stuff.
  2.  You've forgotten what it's like to be a 
beginner.
  3.  If it's not obvious to them, it's not obvious.
  4.  A computer is a means to an end. The person you're helping probably cares 
mostly about the end. This is reasonable.

Their knowledge of the computer is grounded in what they can do and see -- 
"when I do this, it does that". They need to develop a deeper understanding, of 
course, but this can only happen slowly, and not through abstract theory but 
through the real, concrete situations they encounter in their work.

By the time they ask you for help, they've probably tried several different 
things. As a result, their computer might be in a strange state. This is 
natural.

The best way to learn is through apprenticeship -- that is, by doing some real 
task together with someone who has skills that you don't have.

Your primary goal is not to solve their problem. Your primary goal is to help 
them become one notch more capable of solving their problem on their own. So 
it's okay if they take notes.

Most user interfaces are terrible. When people make mistakes it's usually the 
fault of the interface. You've forgotten how many ways you've learned to adapt 
to bad interfaces. You've forgotten how many things you once assumed that the 
interface would be able to do for you.

Knowledge lives in communities, not individuals. A computer user who's not part 
of a community of computer users is going to have a harder time of it than one 
who is.

Having convinced yourself of these things, you are more likely to follow some 
important rules:

  *   Don't take the keyboard. Let them do all the typing, even if it's slower 
that way, and even if you have to point them to each and every key they need to 
type. That's the only way they're going to learn from the interaction.
  *   Find out what they're really trying to do. Is there another way to go 
about it?
  *   Attend to the symbolism of the interaction. Try to squat down so your 
eyes are just below the level of theirs. When they're looking at the computer, 
look at the computer. When they're looking at you, look back at them.
  *   Explain your thinking. Don't make it mysterious. If something is true, 
show them how they can see it's true. When you don't know, say "I don't know".  
When you're guessing, say "let's try ... because ...". Resist the temptation to 
appear all-knowing. Help them learn to think like you.
  *   Be aware of how abstract your language is. For example, "Get into the 
editor" is abstract and "press this key" is concrete. Don't say anything unless 
you intend for them to understand it. Keep adjusting your language downward 
towards concrete units until they start to get it, then slowly adjust back up 
towards greater abstraction so long as they're following you. When formulating 
a take-home lesson ("when it does this and that, you should check 
such-and-such"), check once again that you're using language of the right 
degree of abstraction for this user right now.
  *   Whenever they start to blame themselves, blame the computer, no matter 
how many times it takes, in a calm, authoritative tone of voice. If you need to 
show off, show off your ability to criticize the bad interface. When they get 
nailed by a false assumption about the computer's behavior, tell them their 
assumption was reasonable. Tell *yourself* that it was reasonable.
  *   Formulate a take-home lesson.
  *   Take a long-term view. Who do users in this community get help from? If 
you focus on building that person's skills, the skills will diffuse outward to 
everyone else.
  *   Never do something for someone that they are capable of doing for 
themselves.

  *   Don't say "it's in the manual". (You probably knew that.)"


Source: http://polaris.gseis.ucla.edu/pagre/how-to-help.html

This is just a basic framework to keep in mind when helping others learn 
something that you already know. Adapt it as necessary to fit the situation.

I hope this helps.

Sincerely,
Mike



From: Michael Stowe 
Sent: Sunday, September 16, 2018 11:39:51 AM
To: 

Re: [BackupPC-users] ver 4.x split using ssd and hdd storage - size requirements?

2018-08-28 Thread Mike Hughes
The partition where the "pc" folder lives needs to be larger than the size of 
your largest backup target. It will grow to the size of whatever is being 
backed up at that moment. If you have simultaneous backups, I believe it could 
grow larger than your largest backup target.

Here is how I came to that conclusion: I have been monitoring changes in the 
size of the "pc" folder every second to learn how to anticipate disk usage and 
build a better server. These are the highs and lows experienced over a typical 
evening:

01:00:29   0G
01:04:30   2G
01:07:57   1G
01:11:36   6G
01:22:16   0G
01:27:13   1G
01:31:41   0G
01:53:22   25G
02:12:39   0G
02:31:10   23G
03:05:59   0G

This was measured while backing up seven RHEL web/mysql servers on a LAN; some 
with large database dumps. All backups triggered at 1:00a and most completed in 
20 minutes. The other two servers performed full backups and took about two 
hours each, and must be responsible for the 20GB spikes. I can confirm by 
looking at the "File Size/Count Reuse Summary" table that their sizes are 
20-25GB. Add to that the potential of simultaneous runs so be sure your "pc" 
folder is on a large enough partition.
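
The polling itself was just a shell loop; something like this reproduces the
table above (my reconstruction, not necessarily the exact command I ran):

while true; do
    printf '%s   %sG\n' "$(date +%T)" "$(du -s --block-size=1G /var/lib/BackupPC/pc | cut -f1)"
    sleep 1
done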

From: Mike Hughes
Sent: Friday, August 24, 2018 09:56
To: backuppc-users@lists.sourceforge.net
Subject: RE: [BackupPC-users] ver 4.x split using ssd and hdd storage - size 
requirements?


>-Original Message-

>From: Johan Ehnberg <jo...@molnix.com>

>Sent: Friday, August 24, 2018 09:25

>To: backuppc-users@lists.sourceforge.net

>Subject: Re: [BackupPC-users] ver 4.x split using ssd and hdd storage - size 
>requirements?

>

>On 08/24/2018 04:52 PM, Mike Hughes wrote:

>> I think I've discovered a new level of failure.

...

>> Can't write new host pool count file

>> /var/lib/BackupPC//pc/hostname/refCnt/poolCntNew.1.02

>>

>> My guess is that the /var/lib/BackupPC/pc partition is the problem.

...

>> https://molnix.com/backuppc-version-4-development-allows-better-scaling/

>>

>

>Hi Mike,

>

>Post author here, nice to hear it is of use!



Great! Nice to see you again, Johan!



>Can you check the size of the files that trigger the errors? What does

>df -h tell you during backups? If the failed refcount files linger on

>your drive, how big do they grow?



That was a really good idea and I think you nailed it. Here's a one-second 
update on used space for the /var partition during an incremental backup:



[Cent-7:root@hostname ~]# while printf '%s ' "$(df -P /var | awk 'NR==2 { print 
$(NF-1) }')"; do sleep 1; done

28% 28% 28% 28% 28% 28% 28% 29% 29% 31% 31% 32% 34% 34% 35% 35% 35% 35% 35% 35% 
36% 36% 36% 36% 36% 36% 36% 36% 36% 36% 36% 36% 38% 40% 42% 43% 43% 43% 44% 44% 
44% 44% 44% 45% 45% 45% 45% 45% 45% 45% 45% 45% 45% 45% 46% 46% 46% 46% 46% 46% 
46% 49% 52% 54% 54% 54% 54% 54% 54% 54% 54% 54% 54% 54% 54% 54% 54% 54% 54% 54% 
54% 57% 59% 61% 61% 61% 62% 62% 62% 62% 62% 62% 62% 63% 63% 63% 63% 63% 63% 63% 
63% 63% 63% 63% 63% 63% 63% 63% 63% 63% 63% 64% 67% 69% 70% 70% 71% 71% 71% 71% 
71% 71% 71% 72% 72% 72% 72% 72% 72% 72% 72% 72% 72% 72% 72% 72% 72% 72% 72% 72% 
72% 72% 72% 72% 76% 79% 80% 80% 80% 80% 80% 81% 81% 81% 81% 81% 81% 81% 81% 81% 
81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 
81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 
81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 
81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 
81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 
81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 
81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 
81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 29% 31% 31% 32% 33% 36% 36% 
36% 36% 37% 37% 37% 37% 37% 37% 37% 37% 38% 38% 38% 38% 38% 38% 38% 38% 39% 39% 
39% 39% 39% 39% 39% 39% 39% 39% 39% 39% 39% 39% 39% 41% 42% 43% 43% 44% 46% 48% 
48% 48% 49% 49% 49% 49% 49% 49% 49% 49% 49% 49% 49% 49% 49% 49% 49% 49% 49% 49% 
50% 53% 56% 57% 57% 57% 58% 58% 58% 58% 58% 58% 58% 59% 59% 59% 59% 59% 59% 59% 
59% 59% 59% 59% 59% 59% 59% 59% 59% 59% 59% 59% 59% 59% 60% 63% 65% 67% 67% 68% 
68% 68% 68% 68% 69% 69% 69% 69% 69% 69% 69% 70% 70% 70% 70% 70% 71% 71% 71% 71% 
71% 71% 71% 71% 71% 71% 71% 71% 71% 71% 71% 71% 71% 71% 71% 71% 71% 72% 76% 79% 
81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 
84% 86% 89% 89% 89% 90% 90% 90% 91% 91% 91% 91% 91% 91% 91% 91% 91% 91% 91% 91% 
91% 91% 91% 91% 91% 91% 92% 92% 92% 92% 92% 92% 92% 92% 92% 94% 96% 98% 99% 
100% 100% 100% 100% 100% 100% 100% 93% 93% 28% 28%



If you're able to receive images:

[cid:image001.png@01D43B8F.5C49A780]

caption: NewRelic showing close to but not reaching 100% utilization

Re: [BackupPC-users] ver 4.x split using ssd and hdd storage - size requirements?

2018-08-24 Thread Mike Hughes
>-Original Message-

>From: Johan Ehnberg 

>Sent: Friday, August 24, 2018 09:25

>To: backuppc-users@lists.sourceforge.net

>Subject: Re: [BackupPC-users] ver 4.x split using ssd and hdd storage - size 
>requirements?

>

>On 08/24/2018 04:52 PM, Mike Hughes wrote:

>> I think I've discovered a new level of failure.

...

>> Can't write new host pool count file

>> /var/lib/BackupPC//pc/hostname/refCnt/poolCntNew.1.02

>>

>> My guess is that the /var/lib/BackupPC/pc partition is the problem.

...

>> https://molnix.com/backuppc-version-4-development-allows-better-scaling/

>>

>

>Hi Mike,

>

>Post author here, nice to hear it is of use!



Great! Nice to see you again, Johan!



>Can you check the size of the files that trigger the errors? What does

>df -h tell you during backups? If the failed refcount files linger on

>your drive, how big do they grow?



That was a really good idea and I think you nailed it. Here's a one-second 
update on used space for the /var partition during an incremental backup:



[Cent-7:root@hostname ~]# while printf '%s ' "$(df -P /var | awk 'NR==2 { print 
$(NF-1) }')"; do sleep 1; done

28% 28% 28% 28% 28% 28% 28% 29% 29% 31% 31% 32% 34% 34% 35% 35% 35% 35% 35% 35% 
36% 36% 36% 36% 36% 36% 36% 36% 36% 36% 36% 36% 38% 40% 42% 43% 43% 43% 44% 44% 
44% 44% 44% 45% 45% 45% 45% 45% 45% 45% 45% 45% 45% 45% 46% 46% 46% 46% 46% 46% 
46% 49% 52% 54% 54% 54% 54% 54% 54% 54% 54% 54% 54% 54% 54% 54% 54% 54% 54% 54% 
54% 57% 59% 61% 61% 61% 62% 62% 62% 62% 62% 62% 62% 63% 63% 63% 63% 63% 63% 63% 
63% 63% 63% 63% 63% 63% 63% 63% 63% 63% 63% 64% 67% 69% 70% 70% 71% 71% 71% 71% 
71% 71% 71% 72% 72% 72% 72% 72% 72% 72% 72% 72% 72% 72% 72% 72% 72% 72% 72% 72% 
72% 72% 72% 72% 76% 79% 80% 80% 80% 80% 80% 81% 81% 81% 81% 81% 81% 81% 81% 81% 
81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 
81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 
81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 
81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 
81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 
81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 
81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 
81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 29% 31% 31% 32% 33% 36% 36% 
36% 36% 37% 37% 37% 37% 37% 37% 37% 37% 38% 38% 38% 38% 38% 38% 38% 38% 39% 39% 
39% 39% 39% 39% 39% 39% 39% 39% 39% 39% 39% 39% 39% 41% 42% 43% 43% 44% 46% 48% 
48% 48% 49% 49% 49% 49% 49% 49% 49% 49% 49% 49% 49% 49% 49% 49% 49% 49% 49% 49% 
50% 53% 56% 57% 57% 57% 58% 58% 58% 58% 58% 58% 58% 59% 59% 59% 59% 59% 59% 59% 
59% 59% 59% 59% 59% 59% 59% 59% 59% 59% 59% 59% 59% 59% 60% 63% 65% 67% 67% 68% 
68% 68% 68% 68% 69% 69% 69% 69% 69% 69% 69% 70% 70% 70% 70% 70% 71% 71% 71% 71% 
71% 71% 71% 71% 71% 71% 71% 71% 71% 71% 71% 71% 71% 71% 71% 71% 71% 72% 76% 79% 
81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 81% 
84% 86% 89% 89% 89% 90% 90% 90% 91% 91% 91% 91% 91% 91% 91% 91% 91% 91% 91% 91% 
91% 91% 91% 91% 91% 91% 92% 92% 92% 92% 92% 92% 92% 92% 92% 94% 96% 98% 99% 
100% 100% 100% 100% 100% 100% 100% 93% 93% 28% 28%



If you're able to receive images:

[cid:image001.png@01D43B8F.5C49A780]

caption: NewRelic showing close to but not reaching 100% utilization



So it's definitely requiring more storage than I expected. I love the idea of 
using an SSD to increase performance but I need to better understand the growth 
potential during backups, especially during simultaneous backups.



>I will follow this thread with interest and hopefully I have the time to

>closely monitor what happens in the pc folder during reference counts.

>

>Best regards,

>Johan

>

>--

>Johan Ehnberg

>Founder, CEO

>jo...@molnix.com

>+358503209688

>

>Molnix Oy

>molnix.com



Thank you!

Mike
--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] ver 4.x split using ssd and hdd storage - size requirements?

2018-08-24 Thread Mike Hughes
I think I've discovered a new level of failure. It started off with these 
errors when attempting to rsync larger files:

rsync_bpc: failed to open 
"/home/localuser/mysql/hostname-srv-sql.our_database.sql", continuing: No space 
left on device (28)
rsync_bpc: mkstemp 
"/home/localuser/mysql/.hostname-srv-sql.our_database.sql.00" failed: No 
space left on device (28)

Now all backups are failing and I see these in the Xfer Error logs:

BackupPC_refCountUpdate: can't write new pool count file 
/var/lib/BackupPC//pc/hostname/22/refCnt/poolCntNew.1.a8
BackupPC_refCountUpdate: given errors, redoing host hostname #22 with fsck 
(reset errorCnt to 0)
bpc_poolRefFileWrite: can't open/create pool delta file name 
/var/lib/BackupPC//pc/hostname/22/refCnt/tpoolCntDelta_1_-1_0_70674 (errno 28)
bpc_poolRefRequestFsck: can't open/create fsck request file 
/var/lib/BackupPC//pc/hostname/22/refCnt/needFsck70674 (errno 28)
Can't write new host pool count file 
/var/lib/BackupPC//pc/hostname/refCnt/poolCntNew.1.02

My guess is that the /var/lib/BackupPC/pc partition is the problem. I took 
advice from some rando on the interwebs [1] to put the pc folder on an ssd but 
perhaps it needs more headroom than suggested:

"... Split the storage up on SSD for the pc folder and something more cost 
efficient for cpool such as SMR drives. ...The pc folder in version 4 
essentially only contains the directory structures and references of files that 
they should contain, so it stays very small. However, it is often read from and 
speeds things up remarkably when it is served fast. Much more so than speeding 
up the cpool."

The "pc" folder lives in "/var/lib/BackupPC/pc" on a local ssd with 5GB 
overhead available. The storage pools live on a platter w/70+GB free. Am I 
short-changing the "pc" folder? Does it need to grow significantly during 
backup runs? If so, how much space is suggested?

Unfortunately my monitoring software (NewRelic Infrastructure) does not provide 
much granularity and has previously hidden similar spikes in usage so I don't 
trust its reports, which does not show any capacity violations.

Thank you!

[1] - https://molnix.com/backuppc-version-4-development-allows-better-scaling/
--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] XFS journal location; pc folder on smaller SSD; rsync --inplace question

2018-08-22 Thread Mike Hughes
I'm seeing out-of-storage errors from rsync_bpc when transferring large (4 GB) 
files:

rsync_bpc: failed to open 
"/home/localuser/mysql/hostname-srv-sql.our_database.sql", continuing: No space 
left on device (28)

The volume supporting the 'pc' folder has 6 GB free and lives on an SSD under 
/var/lib/BackupPC whereas the pool folders are linked to a traditional hard 
disk with 78 GB free. The backup pool lives on a 100GB platter with a 
LUKS-encrypted LVM hosting an XFS filesystem. The 'pc' folder resides on the 
operating system's SSD:

# ls -lht /var/lib/BackupPC
total 4.0K
drwxr-x---. 9 backuppc backuppc 4.0K Aug 20 12:06 pc
lrwxrwxrwx. 1 backuppc backuppc   33 Aug 14 14:35 pool -> 
/mnt/backup-volume/BackupPC/pool/
lrwxrwxrwx. 1 backuppc backuppc   34 Aug 14 14:35 cpool -> 
/mnt/backup-volume/BackupPC/cpool/

So which system is it claiming is out of space? Full Xfer error log and other 
data follows:

Running: /usr/bin/rsync_bpc --bpc-top-dir /var/lib/BackupPC/ --bpc-host-name 
hostname-srv-sql --bpc-share-name / --bpc-bkup-num 1 --bpc-bkup-comp 3 
--bpc-bkup-prevnum -1 --bpc-bkup-prevcomp -1 --bpc-bkup-inode0 137627 
--bpc-attrib-new --bpc-log-level 1 -e /usr/bin/ssh\ -l\ backuppc 
--rsync-path=sudo\ /usr/bin/rsync --super --recursive --protect-args 
--numeric-ids --perms --owner --group -D --times --links --hard-links --delete 
--delete-excluded --partial --log-format=log:\ %o\ %i\ %B\ %8U,%8G\ %9l\ %f%L 
--stats --timeout=72000 --exclude=/sys --exclude=\*.vhd hostname-srv-sql:/ /
incr backup started for directory /
Xfer PIDs are now 129260
This is the rsync child about to exec /usr/bin/rsync_bpc

NOTICE: Use of this system is protected and monitored

This system is the property of blah blah blah
...

Xfer PIDs are now 129260,129338
xferPids 129260,129338
[ skipped 2 lines ]
rsync_bpc: failed to open 
"/home/localuser/mysql/hostname-srv-sql.our_database.sql", continuing: No space 
left on device (28)
[ skipped 4 lines ]
rsync_bpc: mkstemp 
"/home/localuser/mysql/.hostname-srv-sql.our_database.sql.00" failed: No 
space left on device (28)
[ skipped 99 lines ]
Done: 0 errors, 2 filesExist, 3001 sizeExist, 520 sizeExistComp, 0 filesTotal, 
0 sizeTotal, 23 filesNew, 2554137 sizeNew, 554933 sizeNewComp, 137627 inode

Relevant BackupPC system data:

backuppc-host: $ df -hP
Filesystem Size  Used Avail Use% Mounted on
/dev/mapper/rootvg-rootlv  7.8G  182M  7.2G   3% /
devtmpfs   1.7G 0  1.7G   0% /dev
tmpfs  1.7G 0  1.7G   0% /dev/shm
tmpfs  1.7G  9.1M  1.7G   1% /run
tmpfs  1.7G 0  1.7G   0% /sys/fs/cgroup
/dev/mapper/rootvg-usrlv   9.8G  1.4G  7.9G  16% /usr
/dev/mapper/rootvg-optlv   2.0G  6.1M  1.8G   1% /opt
/dev/sda1  976M  138M  772M  16% /boot
/dev/mapper/rootvg-tmplv   2.0G  6.3M  1.8G   1% /tmp
/dev/mapper/rootvg-homelv  976M  3.1M  906M   1% /home
/dev/mapper/rootvg-varlv   7.8G  1.5G  5.9G  20% /var
/dev/mapper/luks96G   19G   78G  20% /mnt/backup-volume
/dev/sdb1  6.8G   32M  6.4G   1% /mnt/resource
tmpfs  344M 0  344M   0% /run/user/1001
tmpfs  344M 0  344M   0% /run/user/0

backuppc-host: $ lsblk
NAME                   MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
fd0                      2:0    1   4K  0 disk
sda                      8:0    0  64G  0 disk
├─sda1                   8:1    0   1G  0 part  /boot
└─sda2                   8:2    0  63G  0 part
  ├─rootvg-tmplv       253:1    0   2G  0 lvm   /tmp
  ├─rootvg-usrlv       253:2    0  10G  0 lvm   /usr
  ├─rootvg-swaplv      253:3    0   2G  0 lvm   [SWAP]
  ├─rootvg-optlv       253:4    0   2G  0 lvm   /opt
  ├─rootvg-homelv      253:5    0   1G  0 lvm   /home
  ├─rootvg-varlv       253:6    0   8G  0 lvm   /var
  └─rootvg-rootlv      253:7    0   8G  0 lvm   /
sdb                      8:16   0   7G  0 disk
└─sdb1                   8:17   0   7G  0 part  /mnt/resource
sdc                      8:32   0  96G  0 disk
└─sdc1                   8:33   0  96G  0 part
  └─luks-backup_disk   253:0    0  96G  0 lvm
    └─luks             253:8    0  96G  0 crypt /mnt/backup-volume

hostname-srv: # df -hP
Filesystem  Size  Used Avail Use% Mounted on
/dev/sda231G   14G   16G  47% /
tmpfs   3.4G 0  3.4G   0% /dev/shm
/dev/sda1   477M   97M  355M  22% /boot
/dev/sdb199G  2.1G   92G   3% /mnt/resource

Target system's database directory:

# ls -lha /home/localuser/mysql/
total 5.1G
drwxrwxr-x. 2 localuser localuser 4.0K May  3 01:00 .
drwxr-x---. 3 localuser localuser 4.0K Jan  4  2018 ..
-rw-rw-r--. 1 backuppc backuppc 3.9G Aug 22 01:03 
hostname-srv-sql.our_database.sql
-rw-rw-r--. 1 backuppc backuppc 1.2G May  2 01:02 

Re: [BackupPC-users] Which file system for data pool?

2018-08-14 Thread Mike Hughes
It’s a cloud service so I’m less concerned with the results of the thrashing 

From: Greg Harris 
Sent: Tuesday, August 14, 2018 08:53
To: General list for user discussion, questions and support 

Subject: Re: [BackupPC-users] Which file system for data pool?

You are probably already aware of this, but an SSD’s life expectancy is based 
on the number of writes.  Therefore, utilizing an SSD as the cache drive and 
the OS drive might require some additional considerations over the long haul.

Thanks,

Greg Harris


--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Which file system for data pool?

2018-08-14 Thread Mike Hughes
>-Original Message-
>From: Johan Ehnberg 
>Sent: Tuesday, August 14, 2018 07:29
>To: backuppc-users@lists.sourceforge.net
>Subject: Re: [BackupPC-users] Which file system for data pool?
>
>
>On 08/14/2018 02:44 PM, Tapio Lehtonen wrote:
>> I'm building a BackupPC host, with two SSD disks and two 4 TB rotating
>> disks connected to LSI Logic / Symbios Logic MegaRAID SAS 2108
>> [Liberator] (rev 05). Operating system is Debian GNU/Linux 9.5. The plan
>> is to put OS on SSD disks with RAID1, and BackupPC data pool to rotating
>> disk with RAID1. SSD disks are 240 GB each, I'm open to suggestions how
>> to use part of them for cache or journal device or something.
>>
>> That disk controller has reasonably performant RAID with battery backup,
>> so I prefer using those features. Thus ZFS is not good, my understanding
>> is ZFS should be used with plain host bus adapters.
>>
>> I'm thinking XFS, so inode allocation is not a problem (previously I
>> asked in this mailing list how to recover from out of inodes). What I
>> read indicate XFS is equal or better than Ext4 for most features.
>>
>> I could not find recent recommendations for file system used with
>> BackupPC. Those old ones I found say ReiserFS is good, it probably is
>> but not much maintained recently.
>>
>> So, any recommendation for file system?
>>
>
>
>Terve Tapio,
>
>I would also choose XFS in your case since ZFS is not a good option for you.
>Furthermore, if you want to put the SSD:s to good use, you can put the
>'pc' folder on them if the following conditions are being met:
>
>- You are running BackupPC 4
>- You have no BackupPC 3 backups left (no hardlinks between pc and cpool
>or pool)
>- $Conf{PoolV3Enabled} is off
>
>That will give you a huge performance boost for indexing etc.
>
>Best regards,
>Johan Ehnberg
>
>--
>Johan Ehnberg
>Founder, CEO
>jo...@molnix.com
>+358503209688
>
>Molnix Oy
>molnix.com
>

Interesting discussion. I also put the xfs journal on SSD using:
'mkfs.xfs -l size=64m,logdev=/dev/sdb1 /dev/mapper/luks'
where the luks-encrypted partition is in an LVM on platters and sdb1 is an 
Azure "resource" temporary storage.

Regarding the location of the "pc" folder... I think I need to re-visit that 
since I created a symlink from /var/lib/BackupPC and moved the entire contents 
of that folder to my platters. It sounds like I need to undo that to keep the 
"pc" folder on the SSD. Would a better move be to create a symlink just to the 
"cpool" folder? ie:
ln -s /mnt/backup-volume/BackupPC/cpool /var/lib/BackupPC/cpool
and leave "pool" and "pc" on the SSD with the operating system?


--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] RsyncClientCmd --> RsyncSshArgs

2018-08-13 Thread Mike Hughes
Mystery solved. The defaults included --one-file-system. Removed that and all 
partitions are being backed up 
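
Concretely, that meant deleting '--one-file-system' from $Conf{RsyncArgs},
while custom additions can live in $Conf{RsyncArgsExtra}, e.g. (a sketch; the
checksum seed is the one I mentioned earlier):

$Conf{RsyncArgsExtra} = ['--checksum-seed=32761'];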

From: Mike Hughes
Sent: Monday, August 13, 2018 08:54
To: 'General list for user discussion, questions and support' 

Subject: RE: RsyncClientCmd --> RsyncSshArgs

Hi BackupPC users,

Just curious if anyone else has made the changes necessary to use a non-root 
user account in the 4.X versions and run into any difficulty with incomplete 
backups?

Thank you!

From: Mike Hughes
Sent: Friday, August 10, 2018 14:39
To: backuppc-users@lists.sourceforge.net
Subject: RsyncClientCmd --> RsyncSshArgs

Transitioning from BPC 3.x to 4.x there seem to be some syntactic changes 
regarding rsync & ssh commands. I was able to follow documentation that I think 
lived on SourceForge where it was suggested to configure a non-root account for 
ssh’ing and running rsync under sudo. After some experimentation (and as I 
recall some frustration around “$argList” vs “$argList+”) I came up with the 
following which worked great in version 3.3:

RsyncClientPath: --> /usr/bin/rsync
RsyncClientCmd: --> $sshPath -q -x -l backuppc $host sudo $rsyncPath $argList+
RsyncClientRestoreCmd: --> $sshPath -q -x -l backuppc $host sudo $rsyncPath 
$argList+

I think the only change I made to Rsync[Restore]Args was to add 
--checksum-seed=32761

Now in ver 4.2.1 the variables have changed. This is what I’m able to make work 
so far:

RsyncBackupPCPath: --> /usr/bin/rsync_bpc
RsyncClientPath: --> sudo /usr/bin/rsync
RsyncSshArgs: --> -e
--> $sshPath -l backuppc

However something is definitely off because the backup is incomplete. Many 
files and folders are missing. The most obvious is /home folder:
[cid:image001.jpg@01D432E5.6CA52990]
[caption – the home folder is empty – contains zero users!]

This is the only reference to “home” in my exclusion list:

'/home/windows',

because I want to skip a folder named /home/windows.

Any help appreciated. Since I saw previous requests for this info here is my 
redacted XferErr from the latest run. I also tried removing --protect-args from 
RsyncArgs and added charset = utf-8 to the client’s /etc/rsyncd.conf and 
restarted the service):

Running BackupPC_refCountUpdate -h hostname-loc -f on hostname-loc
Xfer PIDs are now 80158
BackupPC_refCountUpdate: host hostname-loc got 0 errors (took 1 secs)
Xfer PIDs are now
Finished BackupPC_refCountUpdate (running time: 1 sec)
Xfer PIDs are now
XferLOG file /var/lib/BackupPC//pc/hostname-loc/XferLOG.8.z created 2018-08-10 
14:13:12
Backup prep: type = full, case = 6, inPlace = 1, doDuplicate = 0, newBkupNum = 
8, newBkupIdx = 7, lastBkupNum = , lastBkupIdx =  (FillCycle = 0, noFillCnt = 0)
Running: /usr/bin/rsync_bpc --bpc-top-dir /var/lib/BackupPC/ --bpc-host-name 
hostname-loc --bpc-share-name / --bpc-bkup-num 8 --bpc-bkup-comp 3 
--bpc-bkup-prevnum -1 --bpc-bkup-prevcomp -1 --bpc-bkup-inode0 3790 
--bpc-attrib-new --bpc-log-level 1 -e /usr/bin/ssh\ -l\ backuppc 
--rsync-path=sudo\ /usr/bin/rsync --super --recursive --protect-args 
--numeric-ids --perms --owner --group -D --times --links --hard-links --delete 
--delete-excluded --one-file-system --partial --log-format=log:\ %o\ %i\ %B\ 
%8U,%8G\ %9l\ %f%L --stats --checksum --timeout=72000 --exclude=\*access_log\* 
--exclude=.apdisk --exclude=cache/page --exclude=/dev --exclude=/share5 
--exclude=.DS_Store --exclude=\*.DS_Store --exclude=\*error_log\* 
--exclude=/home/windows --exclude=\*.lock --exclude=lost+found --exclude=/media 
--exclude=/mnt --exclude=\*modsec_audit\* --exclude=noback --exclude=no_backup 
--exclude=/opt --exclude=php/session --exclude=/proc --exclude=/run 
--exclude=/this_share --exclude=/moreshares --exclude=/sys 
--exclude=.TemporaryItems --exclude=.Thumbs.db --exclude=tiny_mce 
--exclude=tinymce --exclude=/tmp/ --exclude=.tmp --exclude=.Trashes 
--exclude=/var/backup --exclude=/var/cache --exclude=/var/myprog/logs/autossl 
--exclude=/var/empty/sshd --exclude=/var/lib/mlocate 
--exclude=/var/lib/php/session\* --exclude=/var/lib/waagent 
--exclude=/var/lib/yum/yumdb --exclude=/var/log --exclude=/var/run 
--exclude=/var/spool/at --exclude=/var/spool/clientmqueue 
--exclude=/var/spool/exim --exclude=/var/spool/mqueue 
--exclude=/var/spool/postfix --exclude=/var/tmp --exclude=\*.vhd hostname-loc:/ 
/
full backup started for directory /
Xfer PIDs are now 81916
This is the rsync child about to exec /usr/bin/rsync_bpc

NOTICE: Use of this system is protected and monitored

This system is the property of This Company, Inc. and may be accessed only
by authorized users. Unauthorized use may be subject to criminal prosecution.
...

Xfer PIDs are now 81916,81919
xferPids 81916,81919
[ skipped 8 lines ]
Done: 0 errors, 0 filesExist, 0 sizeExist, 0 sizeExistComp, 0 filesTotal, 0 sizeTotal, 4 filesNew, 14085 sizeNew, 4510 sizeNewComp, 3791 inode

Number of files: 2666
Number of files transferred: 4
Total file size: 30521825 bytes
Total transferred file size: 14085 bytes
Literal data: 5631 bytes
Matched data: 8454 bytes
File list size: 79264
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 4467
Total bytes received: 87346
sent 4467 bytes  received 87346 bytes  183626.00 bytes/sec
total size is 30521825  speedup is 332.43
DoneGen: 0 errors, 0 filesExist, 0 sizeExist, 0 sizeExistComp, 2023 filesTotal,


Re: [BackupPC-users] installation help

2018-08-09 Thread Mike Hughes
Yes, this worked great!

yum install yum-plugin-copr
yum copr enable hobbes1069/BackupPC
yum install backuppc

Thank you both!
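
(For completeness: after installing from the copr repo, the daemon and web UI still need to be switched on. A sketch for CentOS 7, assuming the package ships the usual backuppc systemd unit and the stock Apache setup:)

systemctl enable --now backuppc httpd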

>-Original Message-
>From: Tim Herren 
>Sent: Thursday, August 9, 2018 02:33
>To: backuppc-users@lists.sourceforge.net
>Subject: Re: [BackupPC-users] installation help
>
>We are using the packaged BackupPC for RHEL/CentOS by hobbes1069
>(https://copr.fedorainfracloud.org/coprs/hobbes1069/BackupPC/)
>successfully on several servers.
>I would highly suggest giving it a try.
>
>Tim
>
>On 09.08.2018 06:29, Craig Barratt via BackupPC-users wrote:
>> Unfortunately the package support for BackupPC 4.x isn't very
>> comprehensive, but there are quite a few available.  I appreciate the work
>> people do keeping various packages up to date (I don't support any
>> myself).  For example, see this issue
>> <https://github.com/backuppc/backuppc/issues/127> for RHEL/CentOS rpms.
>>
>> If you decide to install manually, I would recommend downloading the latest
>> releases, rather than using git head (which is usually in a working state,
>> but might not be).  Look in the release page for each project to download
>> the latest tarballs.  That will allow you to skip the first few steps.
>> But, yes, you still need a working compiler environment to build rsync-bpc
>> and backuppc-xs.
>>
>> Craig
>>
>> On Wed, Aug 8, 2018 at 11:34 AM Mike Hughes  wrote:
>>
>>> LOL! I do not want to build everything from source! I’m just following the
>>> documentation as I’ve been able to find it. You suggested a script which
>>> installs on Debian-based systems so I’m trying to follow those steps on
>>> CentOS.
>>>
>>> I’ve been running V 3.3.1 from EPEL for a couple years and am ready to
>>> move to 4.x. If there is a way to install it without building it from
>>> scratch I am all ears! I must have missed the memo!
>>>
>>>
>>>
>>> *From:* Craig Barratt via BackupPC-users <
>>> backuppc-users@lists.sourceforge.net>
>>> *Sent:* Wednesday, August 8, 2018 13:26
>>> *To:* General list for user discussion, questions and support <
>>> backuppc-users@lists.sourceforge.net>
>>> *Cc:* Craig Barratt 
>>> *Subject:* Re: [BackupPC-users] installation help
>>>
>>>
>>>
>>> Mike,
>>>
>>>
>>>
>>> First, I would strongly recommend using the rsync-bpc 3.0.9 branch.  The
>>> head is based on rsync 3.1.2 but it hasn't seen as much testing and hasn't
>>> been released.  You should do a "git checkout 3.0.9" before running
>>> ./configure, eg:
>>>
>>>
>>>
>>> git clone https://github.com/backuppc/rsync-bpc.git
>>>
>>> cd rsync-bpc
>>>
>>> git checkout 3.0.9
>>>
>>> ./configure
>>>
>>> make
>>>
>>>
>>>
>>> Second, why do you actually want to build everything from source?
>>>
>>>
>>>
>>> I don't have access to a Centos 7 box to test building.  However, I
>>> suspect you'll need to install a libacl-devel package (or similar) to get
>>> the C headers etc.  Many linux libraries have two packages - the runtime
>>> library, and another (generally with "devel" in the name) for compiling and
>>> linking code.
>>>
>>>
>>>
>>> Craig
>>>
>>>
>>>
>>> On Wed, Aug 8, 2018 at 10:19 AM Mike Hughes  wrote:
>>>
>>> Thanks for the reply. I’m building on CentOS7 so the example is partly
>>> useful but there are several changes. Here are some basic steps in case
>>> someone else attempts this:
>>>
>>> yum install git build-essential perl-devel perl-CPAN perl-CGI gcc httpd
>>> mod_ssl glusterfs-client
>>>
>>> cpan [accept all defaults]
>>>
>>> install Test::More
>>>
>>> quit
>>>
>>> reboot (might not be necessary)
>>>
>>> git clone https://github.com/backuppc/backuppc.git
>>>
>>> git clone https://github.com/backuppc/backuppc-xs.git
>>>
>>> git clone https://github.com/backuppc/rsync-bpc.git
>>>
>>> cd backuppc-xs/
>>>
>>> perl Makefile.PL
>>>
>>> make
>>>
>>> make test
>>>
>>> make install
>>>
>>>
>>>
>>> cd ../rsync-bpc
>>>
>>> ./configure
>>>
>>> make
>>>
>>>
>>

Re: [BackupPC-users] installation help

2018-08-08 Thread Mike Hughes
LOL! I do not want to build everything from source! I’m just following the 
documentation as I’ve been able to find it. You suggested a script which 
installs on Debian-based systems so I’m trying to follow those steps on CentOS.
I’ve been running V 3.3.1 from EPEL for a couple years and am ready to move to 
4.x. If there is a way to install it without building it from scratch I am all 
ears! I must have missed the memo!

From: Craig Barratt via BackupPC-users 
Sent: Wednesday, August 8, 2018 13:26
To: General list for user discussion, questions and support 

Cc: Craig Barratt 
Subject: Re: [BackupPC-users] installation help

Mike,

First, I would strongly recommend using the rsync-bpc 3.0.9 branch.  The head 
is based on rsync 3.1.2 but it hasn't seen as much testing and hasn't been 
released.  You should do a "git checkout 3.0.9" before running ./configure, eg:

git clone https://github.com/backuppc/rsync-bpc.git
cd rsync-bpc
git checkout 3.0.9
./configure
make

Second, why do you actually want to build everything from source?

I don't have access to a Centos 7 box to test building.  However, I suspect 
you'll need to install a libacl-devel package (or similar) to get the C headers 
etc.  Many linux libraries have two packages - the runtime library, and another 
(generally with "devel" in the name) for compiling and linking code.

Craig

On Wed, Aug 8, 2018 at 10:19 AM Mike Hughes <m...@visionary.com> wrote:
Thanks for the reply. I’m building on CentOS7 so the example is partly useful 
but there are several changes. Here are some basic steps in case someone else 
attempts this:
yum install git build-essential perl-devel perl-CPAN perl-CGI gcc httpd mod_ssl 
glusterfs-client
cpan [accept all defaults]
install Test::More
quit
reboot (might not be necessary)
git clone https://github.com/backuppc/backuppc.git
git clone https://github.com/backuppc/backuppc-xs.git
git clone https://github.com/backuppc/rsync-bpc.git
cd backuppc-xs/
perl Makefile.PL
make
make test
make install

cd ../rsync-bpc
./configure
make

…and this is where it fails with:
lib/sysacls.c:2761:2: error: #error No ACL functions defined for this platform!
#error No ACL functions defined for this platform!
  ^
make: *** [lib/sysacls.o] Error 1

I'm not sure if this is helpful, but ACLs (Access Control Lists) are supported
on this system:

[Cent-7:root@hostname rsync-bpc]# yum install acl
Package acl-2.2.51-14.el7.x86_64 already installed and latest version
Nothing to do

[Cent-7:root@hostname rsync-bpc]# cat /boot/config-3.10.0-862.9.1.el7.x86_64 | 
grep _ACL
CONFIG_EXT4_FS_POSIX_ACL=y
CONFIG_XFS_POSIX_ACL=y
CONFIG_BTRFS_FS_POSIX_ACL=y
CONFIG_FS_POSIX_ACL=y
CONFIG_GENERIC_ACL=y
CONFIG_TMPFS_POSIX_ACL=y
CONFIG_NFS_V3_ACL=y
CONFIG_NFSD_V2_ACL=y
CONFIG_NFSD_V3_ACL=y
CONFIG_NFS_ACL_SUPPORT=m
CONFIG_CEPH_FS_POSIX_ACL=y
CONFIG_CIFS_ACL=y

Am I making a mistake in installing the current version? Perhaps I’d have 
better results installing the version identified in the Debian-based script?
Thank you!
From: Craig Barratt via BackupPC-users <backuppc-users@lists.sourceforge.net>
Sent: Wednesday, August 8, 2018 00:37
To: backuppc-users@lists.sourceforge.net
Cc: Craig Barratt <cbarr...@users.sourceforge.net>
Subject: Re: [BackupPC-users] installation help

Mike,

You have to build and install backuppc-xs first.  Also, makeDist needs a 
--version argument.

The wiki has an example script for building from git:
<https://github.com/backuppc/backuppc/wiki/Installing-BackupPC-4-from-git-on-Ubuntu-Xenial-16.04-LTS>

Craig
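
(Putting those two hints together, the build order would look roughly like this; a sketch, with the version string and the tarball path assumed:)

cd backuppc-xs
perl Makefile.PL && make && make test && make install
cd ../backuppc
perl makeDist --version 4.2.1       # writes a tarball with an updated configure.pl
tar zxf dist/BackupPC-4.2.1.tar.gz  # output location may differ
cd BackupPC-4.2.1
perl configure.pl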

On Tue, Aug 7, 2018 at 10:26 PM Mike Hughes <m...@visionary.com> wrote:
I am new to git-based installations and need some help. I cloned the three
projects and I cd into the backuppc folder to run perl configure.pl, and it
replies with:
[Cent-7:root@hostname backuppc]# perl configure.pl
You need to run makeDist first to create a tarball release that includes an
updated configure.pl.  After you unpack the tarball, run configure.pl from
there.

Not sure why I’d need to generate a tarball but when I run makeDist I get:
[Cent-7:root@ hostname backuppc]# perl makeDist
Can't locate BackupPC/XS.pm in @INC (@INC contains: ./lib 
/usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl 
/usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at 
lib/BackupPC/Lib.pm line 51.
BEGIN failed--compilation aborted at lib/BackupPC/Lib.pm line 51.
Compilation failed in require at makeDist line 59.
BEGIN failed--compilation aborted at makeDist line 59.


