Re: [BackupPC-users] Missing backup files

2022-11-04 Thread Adam Goryachev via BackupPC-users



On 5/11/2022 01:04, G.W. Haywood via BackupPC-users wrote:

Hi there,

On Fri, 4 Nov 2022, Mark Murawski wrote:


...
This is the most recently finished full backup [51] for /etc/ssl/private
...
There's no files in there!! Just directories!? Everything is missing.

And it looks like the *entire* backup system looks like this. I didn't
even know that my backups are completely broken and missing all files.


Incidentally I'm not sure that I'd want the 'backuppc' user to be able
to read private data normally only readable by root, but it's your call
and it might even be that you have it set up that way - I don't know.
FTAOD I'm just trying to help.


I just had to comment here...

I don't understand why you would NOT want backuppc to have at least read 
access to ALL data, including data only accessible to root. I assume you 
are not suggesting running a separate backup system for each user, so 
why would you want to either:


1) Not backup root data
2) Run a separate backup solution just for root data

I guess this will come back to how you set up your data security etc, but 
regardless of what you do, I would strongly suggest you ensure ALL data 
is backed up (because it is always the supposedly unimportant file that 
turns out to be critical and needs to be restored most urgently).


So, for me, I use SSH + rsync to back up ALL target systems, doing that 
as the root user on the target, and I simply use the same method for 
localhost.
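
For anyone wanting to replicate that, the relevant per-host settings 
look roughly like this (a v3-style sketch; v4 instead carries the login 
in $Conf{RsyncSshArgs}):

$Conf{XferMethod} = 'rsync';
$Conf{RsyncClientCmd} = '$sshPath -q -x -l root $host $rsyncPath $argList+';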


As for advice: definitely test your backups, make sure they work, and 
verify by restoring a large enough sample of files and comparing that 
the actual content matches what you expect. One neat "feature request" 
would be for BPC to perform a "verify" which simply shows all files that 
have changed since the last backup, ie, it does everything except adding 
changed/new files to the pool.
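
In the meantime, a minimal spot-check is easy to script. Assuming you 
have restored a sample to /tmp/restore (a hypothetical path), something 
like:

diff -rq /tmp/restore/etc/ssl /etc/ssl && echo "sample verified OK"

will flag any restored file whose content differs from the live copy 
(allowing for files that legitimately changed since the backup ran).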



So, while I haven't followed the whole thread, consider posting your log 
and/or config for the host in question, along with output such as:


$ sudo ls -ld /etc /etc/ssl /etc/ssl/private /etc/ssl/private/*

Then we could provide additional guidance/suggestions.

Regards,
Adam



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] When does compression and de-duplication happen?

2022-09-19 Thread Adam Goryachev via BackupPC-users
It depends on the version of BackupPC (v3 or v4) as to the exact 
sequence of events, but in either case files are processed one at a 
time as they are received. So if an existing copy of a file from 
another host is already in the pool, the file will only require 
additional space during the transfer (I think BPC v4 with rsync will 
avoid transferring the file at all).


On 19/9/2022 18:20, Kenneth Porter wrote:

When backing up a new system that's similar to an existing system, do 
I need enough space on the backup media for the entire new system, or 
just what's different? Will the entire client be pulled over and then 
de-duped, or does that happen as each file is pulled, comparing it to 
what's already in the pool?



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Run command per file before storing in the pool

2022-02-18 Thread Adam Goryachev via BackupPC-users


On 17/2/2022 23:43, Bruno Rogério Fernandes wrote:

Maybe I've got a solution.

Instead of modifying backuppc behavior, I'm planning to disable 
compression setting at the server and create a FUSE filesystem that 
transparently compresses all the images using jpeg-xl format and put 
backuppc pool on top of that.


The only problem I can think of is that every time backuppc has to do 
some reading, the FUSE layer will also need to decompress images on the 
fly. I have to do some testing because my server is not very powerful, 
just a dual-core system. 


I was thinking of something sort of similar...

Why not use a FUSE filesystem on the client which acts as a kind of 
overlay? All directory operations are transparently passed through to 
the native storage location; reads/writes, however, are filtered 
through the "compression" before being transferred to the server. The 
bytes saved by compression are replaced with trailing NULs, which keeps 
the file length the same as the server expects but will compress well 
with pretty much any compression algorithm.
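
As a quick illustration of why the NUL padding costs almost nothing 
after compression: a megabyte of zeros collapses to roughly a kilobyte 
(exact figures vary by gzip version):

head -c 1M /dev/zero | gzip -c | wc -c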


By not modifying the directory information, all the rsync comparisons 
will work without any modification. There is no added load for backuppc, 
and in addition, there is no change to the client when accessing images 
since it would access the real location, not the FUSE mounted version.


Just my thoughts...




___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Backuppc-4 on Debian-11

2021-09-13 Thread Adam Goryachev via BackupPC-users



On 14/9/21 02:00, Juergen Harms wrote:
This is not the place to fight for being right, but to understand and 
document help for users who hit this kind of problem.


Trying to understand: how do you define separate and different 
profiles ("per-host override configs") for each of your 18 different 
PCs in one single .pl file (i.e. your file at 
/etc/backuppc/hostname.pl)? Or do you mean by hostname.pl a list of 
specific files, where hostname.pl stands for an enumeration of 18 files 
with PC-specific names?


I suspect he means 18 files, each file holding the specific config for 
that specific host. This is the way BackupPC is designed: global config 
in config.pl, and config specific to a single host in hostname.pl. 
There are extensions to this whereby you could put config which is 
specific to a group of hosts in a group.pl and "include" it into each 
of the hosts within the group (a sketch follows below), but that is 
outside the scope of this discussion.
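
For illustration, one hypothetical way such an include can look in a 
per-host file (file names are examples only):

# /etc/backuppc/hostname.pl
do "/etc/backuppc/groups/webservers.pl"; # shared group settings
$Conf{RsyncShareName} = [ '/etc', '/home' ]; # host-specific overrides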
If the latter is the case, our disagreement is very small: each of 
these files in /etc/backuppc provides config info for one PC, and the 
pc/ directory does no harm, but is not used (I tried both variants - 
with and without specifying pc/ - both work)


The "pc" symlink (it's not a directory within the Debian package) is a 
compatibility layer to make the Debian package compatible with the 
standard BackupPC documentation and user expectations outside of the 
Debian community. So if you asked for help on the list, you might be 
advised to create a host config file BPC as etc/pc/hostname.pl, assuming 
you are unaware of any specific details, you might navigate to 
/etc/backuppc/pc/ and create a hostname.pl file. This will work as 
expected. If you were aware, you might navigate to /etc/backuppc and 
create the hostname.pl file, which would have the exact same result 
(working as expected).


IMHO, it would appear that you had some config issue, and because 
things were not working you looked for something to blame; the pc 
symlink looked strange, so you blamed that (at least, that is what I 
did in the past). Once you understand that this is a massive non-issue, 
totally irrelevant to any perceived problem, you can ignore it and 
move on.


Regards,
Adam



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Moving to 4.4.0, sudo on FreeBSD

2021-07-23 Thread Adam Goryachev via BackupPC-users
Sounds to me like you might have restricted your source IP in the *bsd 
.ssh/authorized_keys file. Maybe double-check those restrictions.
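
For reference, such a restriction typically looks like this in 
~backuppc/.ssh/authorized_keys on the client (the IP is a placeholder):

from="192.0.2.10" ssh-rsa AAAA...rest-of-key... backuppc@server

If the server's address changed with the move into the jail, a line 
like that would also explain the password prompt mentioned below.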


Regards,
Adam

On 24/7/21 14:27, Brad Alexander wrote:


I ran across what appears to be the reason for the issue that I am 
having. I found the following issue in my console:


/var/log/console.log:Jul 23 23:52:11 danube kernel: Jul 23 23:52:11 danube sudo[2866]: backuppc : command not allowed ; PWD=/usr/home/backuppc ; USER=root ; COMMAND=/usr/bin/rsync --server --sender -slHogDtprcxe.iLsfxC

I don't quite understand it. It appears that

$Conf{RsyncClientPath} = 'sudo /usr/bin/rsync';

in my config.pl is overriding

$Conf{RsyncClientPath} = 'sudo /usr/local/bin/rsync';

in my FreeBSD hosts' .pl files. Are per-host config files no longer 
supported? Is there another way to specify the path for the rsync 
command on a per-host or per-OS basis?


Thanks,
--b


On Fri, Jul 23, 2021 at 4:28 PM Brad Alexander wrote:


I have been running BackupPC 3.x for many years on a Debian Linux
box. I just expanded my TrueNAS box with larger drives, grew my
pool, and am in the process of converting from BackupPC 3.3.1 on
the dedicated server (that has gotten a bit snug on the drive
space) to a 4.4.0 install in a FreeBSD jail on my TrueNAS box,
using the guide at

https://www.truenas.com/community/threads/quickstart-guide-for-backuppc-4-in-a-jail-on-freenas.74080/,
and another page for the changes needed for rsync. I am backing up
both FreeBSD and Linux boxes.

So at this point, the linux boxes are backing up on the 4.4
installation, but the FreeBSD boxes are not. Both are working on
the 3.3.1 machine. I transferred all of my .pl files
from the old backup box to the 4.4.0 jail, and they are identical
to the old configs. So does anyone have any ideas about what could
be happening? I have a log of an iteration of the backup test at

https://pastebin.com/KLKxGYT1 
It is stopping to ask for a password, which it shouldn't be doing,
unless it is looking for rsync_bpc on the client machines.

Thoughts?

Thanks,
--b



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Run VPN script before backup - PingCmd?

2021-07-13 Thread Adam Goryachev via BackupPC-users


On 14/7/21 00:18, Rob Morin wrote:

Wow, ok so that worked!

I put the Connect2Office.sh in the DumpPreUserCmd
It returns a zero as exit status too
And drops the vpn when done with DumpPostUserCmd


The DumpPreUserCmd script looks like this:

#!/bin/bash
sudo openvpn --daemon --config /etc/openvpn/gateway.hardent.com.ovpn
sleep 10
echo $?
/bin/true

DumpPostUserCmd looks like this:

#!/bin/bash
sudo killall openvpn
echo $?
/bin/true

You might consider a better method to ensure you close the "right" 
openvpn tunnel (there could be cases where you have more than one). 
Usually the simplest is to have the config create a pid file on start; 
then you can simply kill `cat /run/openvpn/mytunnel.pid`. It might also 
be an idea to confirm that the pid in that file still belongs to 
openvpn, but at least it's working for you now. The rest are just 
potential improvements you or someone else might need/want in future.
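
A hedged sketch of that approach (paths and tunnel name are 
assumptions):

#!/bin/bash
# pre: start the tunnel and record its pid
PIDFILE=/run/openvpn/mytunnel.pid
sudo openvpn --daemon --writepid "$PIDFILE" --config /etc/openvpn/gateway.hardent.com.ovpn

#!/bin/bash
# post: only kill the pid if it still belongs to openvpn
PIDFILE=/run/openvpn/mytunnel.pid
grep -q openvpn "/proc/$(cat "$PIDFILE")/comm" && sudo kill "$(cat "$PIDFILE")"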

Thanks a bunch Adam.
I hope this helps others!
Have a great day everyone!


Regards,
Adam


On Mon, Jul 12, 2021 at 7:52 PM Adam Goryachev via BackupPC-users
<backuppc-users@lists.sourceforge.net> wrote:

[quoted reply trimmed; the full message appears below under the same
subject]

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/

Re: [BackupPC-users] job queue order

2021-07-12 Thread Adam Goryachev via BackupPC-users



On 13/7/21 09:02, Kenneth Porter wrote:

How can I change the order in the queue?

I just added 18 new "hosts" (actually 6, but with 3 backup jobs per 
host). How can I push them to the front of the queue to initialize 
their first backup? Is there some UI to rearrange the queue order? I 
don't want to force a new job to start running immediately, to avoid 
loading down the network. I just want to make sure those jobs run next.


Pretty sure you could manually start a backup on those hosts you want 
done first, but it will only actually start up to your configured 
$Conf{MaxUserBackups} value. I'm not sure, but I suspect the rest will 
simply sit at the top of the queue.


Regards,
Adam

--



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Run VPN script before backup - PingCmd?

2021-07-12 Thread Adam Goryachev via BackupPC-users


On 13/7/21 05:32, Rob Morin wrote:

Hello all...

I was looking at a way to start up my vpn from our remote backup site 
to the office when backuppc starts a job.


I googled around for quite a bit and saw some people were using a 
script in place of the PingCmd parameter.


I have tried that but I can't get it to work, nor to stop the 
connection when done using the DumpPostUserCmd.


In a host, where the PingCmd text box is located, I entered:
/usr/local/bin/Connect2Office.sh
And made sure the check mark was there and I saved it.

The script itself is below, not much at all, really.

#!/bin/bash
/bin/true
sudo openvpn --daemon --config /etc/openvpn/gateway.hardent.com.ovpn
echo $?

I added the --daemon in order to put the process in the 
background while running

/bin/true is there because i thought the exit status had to be something
and the echo $? is there for same exit status reason.

The user backuppc is allowed to sudo that script via the sudoers file.

Now, when I manually run this command as the user backuppc or 
root,  from the command line, all works well, and I can manually start 
a backup and it completes fine.


However, when I click on the start incremental job from GUI for the 
same host, as a test, the log file simply shows the below and nothing 
gets backed up.


2021-07-12 14:49:45 incr backup started back to 2021-07-12 14:33:53 
(backup #0) for directory /etc


Then after several minutes of nothing I dequeue the backup and get the 
below, which is of course normal.


2021-07-12 14:51:26 Aborting backup up after signal INT

I am sure I am doing something stupid.
Any help would be appreciated.

Have a great day!


I'm not sure about using PingCmd for this; why not use the 
DumpPreUserCmd 
http://backuppc.sourceforge.net/faq/BackupPC.html#_conf_dumppreusercmd_ 
? Its stdout will be sent to the log so you can see what is happening.


As for the script, usually you would run /bin/true as the last command 
so that it will ignore any other exit status and always report 
"successful". So based on the current script, that line is pointless 
unless you move it to after the openvpn command.


You might also need to check capabilities; backuppc probably doesn't 
have CAP_NET_ADMIN or the ability to create tunnels etc. So once you 
are sure the script is being run (maybe add a touch /tmp/myscript), you 
might want to define an openvpn log file so you can see what it is 
doing and/or why it fails.


You might also need a sleep or some other test to ensure the tunnel is 
actually up and passing traffic, as openvpn will return before the 
tunnel is up, and backuppc will then attempt to start the backup.
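
Putting those pieces together, a corrected DumpPreUserCmd script might 
look like this (the ping target is an assumption; use an address only 
reachable over the VPN):

#!/bin/bash
sudo openvpn --daemon --config /etc/openvpn/gateway.hardent.com.ovpn
# wait up to 30s for the tunnel to actually pass traffic
for i in $(seq 1 30); do
    ping -c1 -W1 10.8.0.1 >/dev/null 2>&1 && break
    sleep 1
done
/bin/true # always report success, per the note above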


Regards,
Adam



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Problem with WakupSchedule and Backupplan

2021-05-04 Thread Adam Goryachev via BackupPC-users


On 4/5/21 22:00, Ralph Sikau wrote:

On Wednesday, 28.04.2021 at 16:30 +, backuppc-users-requ...@lists.sourceforge.net wrote:

However, I have a suspicion that you are on the right track.  If a backup is 
missed, then it is put into the queue and it will start as soon as it is able 
to.  You may be able to use the blackout hours to help.

Greg,
I think it would help if I could get answers to these
questions:
1. For how long does the backup system STAY awake after
having been awakened according to the WakeupSchedule?

Until the queue is empty.

2. When a backup starts at 11:30 pm will it go on then over
midnight during the following night or will it go on hold
at midnight?

A started backup will continue until finished, regardless of blackout 
periods etc.

3. Is there a possibility to see what is hold in the backup
queue?

Yes, on the web interface, click "Current Queues"

Maybe you have the answers.


More information:

When backuppc wakes up, it will put all hosts that are due to be backed 
up on the queue (ie, their backup schedule and last backup completed 
times are too far apart). It will then take the number of jobs from the 
queue that your config says it can run in parallel, and start them. If 
a backup starts but is inside the blackout window, then it is 
immediately stopped (ie, it never really starts the xfer) and is 
removed from the queue. The same happens if the ping time is too long, 
or whatever other constraint suggests the backup has failed/can't 
start. The next backup on the queue will then start. Eventually, all 
backups on the queue will complete, and backuppc goes back to sleep.


If the backups take too long and continue past the next wakeup period, 
then all due hosts not already on the queue will be added to the queue. 
This is why you can set the wakeup schedule to every 5 minutes without 
causing a problem: the wakeup schedule basically just defines the 
minimum amount of time between a backup becoming due and it being 
placed on the queue.
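
For example, in config.pl (the docs allow fractional hours, eg 4.25 
means 4:15am):

$Conf{WakeupSchedule} = [ map { $_ / 12 } 0 .. 287 ]; # every 5 minutes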


Regards,
Adam

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] BackupPC permission issue?

2021-03-31 Thread Adam Goryachev via BackupPC-users


On 1/4/21 00:08, Joseph Bishay wrote:

Hello Adam and everyone,

Thank you for the reply.  I've responded below:

On Tue, Mar 30, 2021 at 10:42 PM Adam Goryachev via BackupPC-users 
<backuppc-users@lists.sourceforge.net> wrote:


On 31/3/21 12:26, Joseph Bishay wrote:



I have BackupPC backing up a Linux client and it appears to only
back up certain files.  The pattern seems to be that if the
directory has permissions of -rw-r--r-- BackupPC can enter, read
the files and back them up correctly, but if the directory has
permissions of drwx-- it creates that directory but cannot
enter and read the files within it.

The error log file shows multiple lines of:
Remote[1]: rsync: opendir "/directory/with/files" failed:
Permission denied (13)

Other parts of the filesystem are being backed up correctly it
appears.  The BackupPC automatically connects as the user
BackupPC on the client and that backupPC user has the ability to
run rsync as root. On the client I have:

$ cat /etc/sudoers.d/backuppc giving:
backuppc ALL=NOPASSWD: /usr/bin/rsync
backuppc ALL=NOPASSWD: /usr/bin/whoami  #added this one for debugging

From BackupPC running the command:
ssh -l backuppc client_IP "whoami"
returns backuppc

and running the command
ssh -l backuppc client_IP "sudo whoami"
returns root

so it seems to be working correctly.

In the client config file on BackupPC, variable is set as:
RsyncClientCmd = "$sshPath -q -x -l backuppc $host $rsyncPath
$argList+"


Aren't you missing a sudo somewhere in the command? not sure how
you have defined rsyncPath, but that looks like it could be the issue.

Maybe you could post the logs which will show the actual commands
being run after variable expansion.

Regards,
Adam


I am not sure if there should be a sudo somewhere or how that works 
unfortunately - I do not understand this very well. rsyncClientPath is 
defined as /usr/bin/rsync. It appears rsync is working since I am 
getting part of the drive backed up, just not certain folders.


The Xferlog file shows:

Contents of file /var/lib/backuppc/pc/client_IP/XferLOG.0.z, modified 
2021-03-28 21:25:06


full backup started for directory /
Running: /usr/bin/ssh -q -x -l backuppc client_IP /usr/bin/rsync 
--server --sender --numeric-ids --perms --owner --group -D --links 
--hard-links --times --block-size=2048 --recursive --ignore-times . /



You are definitely missing a "sudo" in there. If you look at what you 
have, you are calling ssh with some flags (-q -x), using the account 
backuppc (-l backuppc) to log in to the remote machine "client_IP", and 
once logged in, running /usr/bin/rsync with some options etc.



ssh -l backuppc client_IP "whoami"
This is the same example you posted above, as you can see, it is running 
as the user backuppc



ssh -l backuppc client_IP "sudo whoami"
As you can see, adding the "sudo" means you are going to end up running 
the command as root.



RsyncClientCmd = "$sshPath -q -x -l backuppc $host $rsyncPath $argList+"

I would suggest changing this to:

RsyncClientCmd = "$sshPath -q -x -l backuppc $host /usr/bin/sudo 
$rsyncPath $argList+"


Assuming your sudo is in /usr/bin/sudo. To check, login and run:

which sudo

Pretty sure that should solve the permissions problem, although I don't 
use sudo with backuppc, so there could be other issues that I'm not 
aware of.


Regards,
Adam

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] BackupPC permission issue?

2021-03-30 Thread Adam Goryachev via BackupPC-users


On 31/3/21 12:26, Joseph Bishay wrote:

Hello,

I hope you are all doing very well today.

I have BackupPC backing up a Linux client and it appears to only back 
up certain files.  The pattern seems to be that if the directory has 
permissions of -rw-r--r-- BackupPC can enter, read the files and back 
them up correctly, but if the directory has permissions of drwx------ 
it creates that directory but cannot enter and read the files within it.


The error log file shows multiple lines of:
Remote[1]: rsync: opendir "/directory/with/files" failed: Permission 
denied (13)


Other parts of the filesystem are being backed up correctly it 
appears.  The BackupPC automatically connects as the user BackupPC on 
the client and that backupPC user has the ability to run rsync as 
root.  On the client I have:


$ cat /etc/sudoers.d/backuppc giving:
backuppc ALL=NOPASSWD: /usr/bin/rsync
backuppc ALL=NOPASSWD: /usr/bin/whoami  #added this one for debugging

From BackupPC running the command:
ssh -l backuppc client_IP "whoami"
returns backuppc

and running the command
ssh -l backuppc client_IP "sudo whoami"
returns root

so it seems to be working correctly.

In the client config file on BackupPC, variable is set as:
RsyncClientCmd = "$sshPath -q -x -l backuppc $host $rsyncPath $argList+"

Aren't you missing a sudo somewhere in the command? not sure how you 
have defined rsyncPath, but that looks like it could be the issue.


Maybe you could post the logs which will show the actual commands being 
run after variable expansion.


Regards,
Adam

I am not sure if the issue is a file / directory permission issue, or 
a BackupPC configuration issue, or something else. Any help would be 
greatly appreciated!


Thank you,
Joseph

P.S. I sent this email before to the mailing list but it did not go 
through as I was not a member.  I subscribed and am re-sending it.



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Per-host $Conf{MaxBackups} effects

2021-03-11 Thread Adam Goryachev via BackupPC-users


On 12/3/21 00:03, Dave Sherohman wrote:


If I were to set $Conf{MaxBackups} = 1 for one specific host, how 
would that be handled?  Would it prevent that specific host from 
running backups unless there are no other backups in progress?  Would 
it prevent any other backups from being started before that host 
finished?  Would it do both?  Or is that an inherently-global setting 
that has no effect if set for a single host?


My use-case here is that I've got a lot of linux hosts and a handful 
of windows machines.  The linux hosts work great with standard 
ssh/rsync configuration, no problems there.


The windows machines, on the other hand, are using a windows backuppc 
client that our windows admin found on sourceforge and it's having... 
problems... with handling shadow volumes.  As in it appears to be 
failing to create them, which causes backup runs to take many hours as 
it waits for "device or resource busy" files to time out.  Which ties 
up available slots in the MaxBackups limit and prevents the linux 
machines from being scheduled.


So I'm thinking that it might work to temporarily set the windows 
hosts to MaxBackups = 1, if that would prevent multiple windows hosts 
from running at the same time and free up slots for the linux hosts to 
run.  If it would also prevent linux hosts from running when a windows 
host is in progress, though, then that would just make things worse.


Or is there some other way I could specify "run four backups at once, 
BUT only one of these six can run at a time (alongside three others 
which aren't in that group)"?


I'm pretty sure this has been discussed before, and is not possible. 
However, I would suggest spending a bit more time resolving the issues 
with the Windows server backups. There is an updated set of 
instructions posted recently to the list (check the archives); if you 
need help getting something working, the list is a great place to ask. 
Once it works, the Windows machines will back up just as well as the 
Linux ones.


HTH

Regards,
Adam



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Simple server side embedded config file to allow full shadow backups of Windows host

2021-02-26 Thread Adam Goryachev via BackupPC-users


On 27/2/21 08:23, backu...@kosowsky.org wrote:

Adam Goryachev wrote at about 05:48:56 +1100 on Saturday, February 27, 2021:

  > >
  > >   > I was missing the ClientShareName2Path. I've added that in, but now I
  > >   > get another error:
  > >   >
  > >   > No such NTFS drive 'c:' skipping corresponding shadow setup...
  > >   >     'c' => /cygdrive/c/shadow/c-20210226-234449
  > >   > Eval return value: 1
  > >   >
  > >   > I'm thinking it might be a case sensitive issue, so am waiting for it 
to
  > >   > finish before adjusting the config and retrying:
  > >   > $Conf{RsyncShareName} = [
  > >   >    'C'
  > >   > ];
  > >   > $Conf{ClientShareName2Path} = {
  > >   >      'C' => '/C',
  > >   > };
  > >   >
  > >   > ie, using all capital C instead of the lower case c. Or are there any
  > >   > other hints?
  > >   >
  > > It shouldn't be case sensitive.
  > > And personally, I think I use lower case 'c'
  > >
  > > Tell me what the following commands give:
  > >
  > > # cygpath -u C:
  > > # cygpath -u c:
  > >
  > > # ls $(cygpath -u C:)/..
  > > # ls $(cygpath -u c:)/..
  > >
  > > # mount -m | grep "^C: "
  > > # mount -m | grep "^c: "
  > >
  > Results:
  >
  > $ cygpath -u C:
  > /cygdrive/c
  > $ cygpath -u c:
  > /cygdrive/c
  > $ ls $(cygpath -u C:)/..
  > c  d
  > $ ls $(cygpath -u c:)/..
  > c  d
  > $ mount -m | grep "^C: "
  > $ mount -m | grep "^c: "
  > $ mount -m
  > none /cygdrive cygdrive binary,posix=0,user 0 0
  >
  > $ mount
  > C:/cygwin64/root/bin on /usr/bin type ntfs (binary,auto)
  > C:/cygwin64/root/lib on /usr/lib type ntfs (binary,auto)
  > C:/cygwin64/root on / type ntfs (binary,auto)
  > C: on /cygdrive/c type ntfs (binary,posix=0,user,noumount,auto)
  > D: on /cygdrive/d type udf (binary,posix=0,user,noumount,auto)
  >
  > So all seem to work with lowercase or uppercase, but for some reason,
  > neither works when from the script.
  >
  > The only "non-standard" thing I've done is all the cygwin tools are
  > installed to C:\cygwin64\root instead of the default which installs them
  > to C:\cygwin64\
  >
  > OK, from re-checking the error and the script, it looks like it's
  > failing because mount -m doesn't show the c: ...
  >

Yup.
On my machine, "mount -m" gives the letter drives...
You could try substituting the following

-if ! [ -d "$(cygpath -u ${I}:)" ] || ! grep -qE "^${I^^}: \S+ ntfs " <(mount -m); then
+if ! [ -d "$(cygpath -u ${I}:)" ] || ! grep -qE "^${I^^}: on \S+ type ntfs " <(mount); then


That seems to have fixed it, at least the shadow was created, and backup 
is starting. Will have to wait a while for the backup to complete, but 
looks good so far.


   my $sharenameref = $bpc->{Conf}{ClientShareName2Path};
   foreach my $key (keys %{$sharenameref}) { # Rewrite ClientShareName2Path
      $sharenameref->{$key} = "$shadowdir$2-$hosttimestamp$3"
         if $sharenameref->{$key} =~ m#^(/cygdrive)?/([a-zA-Z])(/.*)?$#; # Add shadow if letter drive
   }
   print map { "   '$_' => $sharenameref->{$_}\n" } sort(keys %{$sharenameref}) unless $?;
}}
Junction created for C:\shadow\C-20210227.133140-keep <<===>> \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy5\\

   'C' => /cygdrive/c/shadow/C-20210227.133140-keep
Eval return value: 1
__bpc_progress_state__ backup share "C"
Running: /usr/local/bin/rsync_bpc --bpc-top-dir /var/lib/backuppc 
--bpc-host-name hostvm2 --bpc-share-name C --bpc-bkup-num 2 
--bpc-bkup-comp 3 --bpc-bkup-prevnum 1 --bpc-bkup-prevcomp 3 
--bpc-bkup-inode0 608083 --bpc-log-level 1 --bpc-attrib-new -e 
/usr/bin/ssh\ -l\ BackupPC 
--rsync-path=/cygdrive/c/cygwin64/root/bin/rsync.exe --super --recursive 
--protect-args --numeric-ids --perms --owner --group -D --times --links 
--hard-links --delete --partial --log-format=log:\ %o\ %i\ %B\ %8U,%8G\ 
%9l\ %f%L --stats --checksum --one-file-system --timeout=72000 
10.1.1.119:/cygdrive/c/shadow/C-20210227.133140-keep/ /
full backup started for directory C (client path 
/cygdrive/c/shadow/C-20210227.133140-keep)

started full dump, share=C
Xfer PIDs are now 25288
xferPids 25288
This is the rsync child about to exec /usr/local/bin/rsync_bpc
cmdExecOrEval: about to exec /usr/local/bin/rsync_bpc --bpc-top-dir 
/var/lib/backuppc --bpc-host-name hostvm2 --bpc-share-name C 
--bpc-bkup-num 2 --bpc-bkup-comp 3 --bpc-bkup-prevnum 1 
--bpc-bkup-prevcomp 3 --bpc-bkup-inode0 608083 --bpc-log-level 1 
--bpc-attrib-new -e /usr/bin/ssh\ -l\ BackupPC 
--rsync-path=/cygdrive/c/cygwin64/root/bin/rsync.exe --super --recursive 
--protect-args --numeric-ids --perms --owner --group -D --times --links 
--hard-links --delete --partial --log-format=log:\ %o\ %i\ %B\ %8U,%8G\ 
%9l\ %f%L --stats --checksum --one-file-system --timeout=72000 
10.1.1.119:/cygdrive/c/shadow/C-20210227.133140-keep/ /

Xfer PIDs are now 25288,25291
xferPids 25288,25291
xferPids 25288,25291
__bpc_progress_fileCnt__ 1
    new    recv cd+ ---r-x---   328384,  328384 0 .
    same   recv >f..tpog... rwx

Re: [BackupPC-users] Simple server side embedded config file to allow full shadow backups of Windows host

2021-02-26 Thread Adam Goryachev via BackupPC-users


On 27/2/21 03:31, backu...@kosowsky.org wrote:

Adam Goryachev via BackupPC-users wrote at about 00:41:40 +1100 on Saturday, 
February 27, 2021:

  > Also, I was thinking it should be possible to have this script in a
  > single file, and then just include or require it for each host, does
  > that work? That would make the config file look a lot cleaner, and
  > updating the script in a single file is better than updating for each host.


Specifically I do things like the following:

my $jhost = $_[1];
$Conf{BlackoutPeriods} = []
    if $jhost =~ /^(machineA|machine[0-9]|othermachine)$/;

if ($jhost =~ /^machineA$/) {
    $Conf{BackupsDisable} = 0; # Scheduled/automatic
} elsif ($jhost =~ /^ABCD$/) { # Specify hosts to disable
    $Conf{BackupsDisable} = 2; # Disable
} elsif ($jhost =~ /machine[0-9]*$/) {
    $Conf{BackupsDisable} = 2; # CHANGE TO 1 to enable manual
}

etc.
Such logic can be continued for any differences between machines...


Ouch, that looks overly complex... it means mixing configs from 
different hosts into the same "script". I'll look into these more 
advanced options after I get the simple version working. I'm thinking 
something as simple as:


require "./windows_shadow.pl";



  > I was missing the ClientShareName2Path. I've added that in, but now I
  > get another error:
  >
  > No such NTFS drive 'c:' skipping corresponding shadow setup...
  >     'c' => /cygdrive/c/shadow/c-20210226-234449
  > Eval return value: 1
  >
  > I'm thinking it might be a case sensitive issue, so am waiting for it to
  > finish before adjusting the config and retrying:
  > $Conf{RsyncShareName} = [
  >    'C'
  > ];
  > $Conf{ClientShareName2Path} = {
  >      'C' => '/C',
  > };
  >
  > ie, using all capital C instead of the lower case c. Or are there any
  > other hints?
  >
It shouldn't be case sensitive.
And personally, I think I use lower case 'c'

Tell me what the following commands give:

# cygpath -u C:
# cygpath -u c:

# ls $(cygpath -u C:)/..
# ls $(cygpath -u c:)/..

# mount -m | grep "^C: "
# mount -m | grep "^c: "


Results:

$ cygpath -u C:
/cygdrive/c
$ cygpath -u c:
/cygdrive/c
$ ls $(cygpath -u C:)/..
c  d
$ ls $(cygpath -u c:)/..
c  d
$ mount -m | grep "^C: "
$ mount -m | grep "^c: "
$ mount -m
none /cygdrive cygdrive binary,posix=0,user 0 0

$ mount
C:/cygwin64/root/bin on /usr/bin type ntfs (binary,auto)
C:/cygwin64/root/lib on /usr/lib type ntfs (binary,auto)
C:/cygwin64/root on / type ntfs (binary,auto)
C: on /cygdrive/c type ntfs (binary,posix=0,user,noumount,auto)
D: on /cygdrive/d type udf (binary,posix=0,user,noumount,auto)

So all seem to work with lowercase or uppercase, but for some reason, 
neither works when from the script.


The only "non-standard" thing I've done is all the cygwin tools are 
installed to C:\cygwin64\root instead of the default which installs them 
to C:\cygwin64\


OK, from re-checking the error and the script, it looks like it's 
failing because mount -m doesn't show the c: ...


Thanks,
Adam


  > I've also updated the script based on the new version you posted
  > recently, though I'm assuming that won't make much difference to this issue.
  >
  > So, nope, that didn't work, I'll post more of the output below. I can
  > manually login to the machine and run the command (from bash shell)
  >
  > $ wmic shadowcopy call create Volume=C:\\
  > Executing (Win32_ShadowCopy)->create()
  > Method execution successful.
  > Out Parameters:
  > instance of __PARAMETERS
  > {
  >      ReturnValue = 0;
  >      ShadowID = "{2EB3E2AF-D099-44BA-8D43-A48B1760C73F}";
  > };
  >
  > So it seems to suggest that it should work, most likely I'm again
  > missing some obvious config, or doing something wrong, but seems it
  > should be pretty close...
  >
  > Config file now has:
  >
  > $Conf{ClientNameAlias} = [
  >    '10.1.1.119'
  > ];
  > $Conf{XferMethod} = 'rsync';
  > $Conf{RsyncdUserName} = 'BackupPC';
  > $Conf{RsyncShareName} = [
  >    'C'
  > ];
  > $Conf{ClientShareName2Path} = {
  >      'C' => '/C',
  > };
  > $Conf{RsyncSshArgs} = [
  >    '-e',
  >    '$sshPath -l BackupPC'
  > ];
  > $Conf{RsyncClientPath} = '/cygdrive/c/cygwin64/root/bin/rsync.exe';
  > $Conf{PingMaxMsec} = 100;
  >
  > Plus of course a copy of your script config file, updated today.
  >
  >
  > Backup type: type = full, needs_full = , needs_incr = , lastFullTime =
  > 1614263640, opts{f} = 1, opts{i} = , opts{F} =
  > cmdSystemOrEval: abou

Re: [BackupPC-users] Simple server side embedded config file to allow full shadow backups of Windows host

2021-02-26 Thread Adam Goryachev via BackupPC-users
==\ \$8\ \{print\ \$3\}\'\)\"\
\ \ \ \ \ \ SHADOWLINK=\"\$\(cygpath\ -w\ 
${shadowdir}\)\$I-$hosttimestamp\"\

\ \ \ \ \ \ cmd\ /c\ \"mklink\ /j\ \$SHADOWLINK\ \$SHADOWPATH\"\
\ \ \ \ \ \ unset\ SHADOWID\ SHADOWPATH\ SHADOWLINK\
\ \ \ \ \ \ done\
";
 #Run script $bashscript on remote host via ssh
   open(my $out_fh, "|-", "/usr/bin/ssh -q -x -i /var/lib/backuppc/.ssh/id_rsa -l BackupPC $args[0]->{hostIP} bash -s")
      or warn "Can't start ssh: $!";
   print $out_fh $bashscript;
   close $out_fh or warn "Error flushing/closing pipe to ssh: $!";

   my $sharenameref = $bpc->{Conf}{ClientShareName2Path};
   foreach my $key (keys %{$sharenameref}) { # Rewrite ClientShareName2Path
      $sharenameref->{$key} = "$shadowdir$2-$hosttimestamp$3"
         if $sharenameref->{$key} =~ m#^(/cygdrive)?/([a-zA-Z])(/.*)?$#; # Add shadow if letter drive
   }
   print map { "   '$_' => $sharenameref->{$_}\n" } sort(keys %{$sharenameref}) unless $?;
}}
No such NTFS drive 'C:' skipping corresponding shadow setup...
   'C' => /cygdrive/c/shadow/C-20210227.003013-keep
Eval return value: 1
__bpc_progress_state__ backup share "C"
Running: /usr/local/bin/rsync_bpc --bpc-top-dir /var/lib/backuppc 
--bpc-host-name hostvm2 --bpc-share-name C --bpc-bkup-num 1 
--bpc-bkup-comp 3 --bpc-bkup-prevnum -1 --bpc-bkup-prevcomp -1 
--bpc-bkup-inode0 608082 --bpc-log-level 1 --bpc-attrib-new -e 
/usr/bin/ssh\ -l\ BackupPC 
--rsync-path=/cygdrive/c/cygwin64/root/bin/rsync.exe --super --recursive 
--protect-args --numeric-ids --perms --owner --group -D --times --links 
--hard-links --delete --partial --log-format=log:\ %o\ %i\ %B\ %8U,%8G\ 
%9l\ %f%L --stats --checksum --one-file-system --timeout=72000 
10.1.1.119:/cygdrive/c/shadow/C-20210227.003013-keep/ /
full backup started for directory C (client path 
/cygdrive/c/shadow/C-20210227.003013-keep)

started full dump, share=C
Xfer PIDs are now 4016
xferPids 4016
This is the rsync child about to exec /usr/local/bin/rsync_bpc
cmdExecOrEval: about to exec /usr/local/bin/rsync_bpc --bpc-top-dir 
/var/lib/backuppc --bpc-host-name hostvm2 --bpc-share-name C 
--bpc-bkup-num 1 --bpc-bkup-comp 3 --bpc-bkup-prevnum -1 
--bpc-bkup-prevcomp -1 --bpc-bkup-inode0 608082 --bpc-log-level 1 
--bpc-attrib-new -e /usr/bin/ssh\ -l\ BackupPC 
--rsync-path=/cygdrive/c/cygwin64/root/bin/rsync.exe --super --recursive 
--protect-args --numeric-ids --perms --owner --group -D --times --links 
--hard-links --delete --partial --log-format=log:\ %o\ %i\ %B\ %8U,%8G\ 
%9l\ %f%L --stats --checksum --one-file-system --timeout=72000 
10.1.1.119:/cygdrive/c/shadow/C-20210227.003013-keep/ /
rsync: [sender] change_dir "/cygdrive/c/shadow/C-20210227.003013-keep" 
failed: No such file or directory (2)






Adam Goryachev via BackupPC-users wrote at about 17:04:21 +1100 on Friday, 
February 26, 2021:
  > Hi,
  >
  > I've just setup a new Win10 machine, and thought I'd try this solution
  > to do the backup...
  >
  > So far, I have installed the MS SSH server, using the powershell command
  > line installation method, copied the backuppc ssh public key across,
  > used a powershell script to fix permissions on the file. Confirmed I
  > could login from the backuppc host as a new backuppc user
  > (administrative access).
  >
  > I then downloaded cygwin, ran the setup, and installed rsync plus all
  > other defaults (did not install SSH).
  >
  > I then changed the default SSH shell to bash instead of powershell
  > (registry key).
  >
  > Fixed the PATH variable in the .bashrc to ensure cygwin's /bin was included
  >
  > Copied the below script to my new hosts.pl config file, along with the
  > following host specific config:
  >
  > $Conf{ClientNameAlias} = [
  >    '10.1.1.119'
  > ];
  > $Conf{XferMethod} = 'rsync';
  > $Conf{RsyncdUserName} = 'BackupPC';
  > $Conf{RsyncShareName} = [
  >    '/cygdrive/C/'
  > ];
  > $Conf{RsyncSshArgs} = [
  >    '-e',
  >    '$sshPath -l BackupPC'
  > ];
  > $Conf{RsyncClientPath} = '/cygdrive/c/cygwin64/root/bin/rsync.exe';
  > $Conf{PingMaxMsec} = 100;
  > $Conf{BlackoutPeriods} = [];
  >
  > However, when I try to run the backup, I get the following:
  >
  > Executing DumpPreUserCmd: &{sub {
  > #Load variables
  > my $timestamp = "20210226-012400";
  > my $shadowdir = "/cygdrive/c/shadow/";
  > my $shadows = "";
  >
  > my $bashscript = "DAYS=2\
  >
  > etc (cut)
  >
  >print map { "   '$_' => $sharenameref->{$_}
  > &q

Re: [BackupPC-users] Simple server side embedded config file to allow full shadow backups of Windows host

2021-02-25 Thread Adam Goryachev via BackupPC-users

Hi,

I've just setup a new Win10 machine, and thought I'd try this solution 
to do the backup...


So far, I have installed the MS SSH server, using the powershell command 
line installation method, copied the backuppc ssh public key across, 
used a powershell script to fix permissions on the file. Confirmed I 
could login from the backuppc host as a new backuppc user 
(administrative access).


I then downloaded cygwin, ran the setup, and installed rsync plus all 
other defaults (did not install SSH).


I then changed the default SSH shell to bash instead of powershell 
(registry key).


Fixed the PATH variable in the .bashrc to ensure cygwin's /bin was included

Copied the below script to my new hosts.pl config file, along with the 
following host specific config:


$Conf{ClientNameAlias} = [
  '10.1.1.119'
];
$Conf{XferMethod} = 'rsync';
$Conf{RsyncdUserName} = 'BackupPC';
$Conf{RsyncShareName} = [
  '/cygdrive/C/'
];
$Conf{RsyncSshArgs} = [
  '-e',
  '$sshPath -l BackupPC'
];
$Conf{RsyncClientPath} = '/cygdrive/c/cygwin64/root/bin/rsync.exe';
$Conf{PingMaxMsec} = 100;
$Conf{BlackoutPeriods} = [];

However, when I try to run the backup, I get the following:

Executing DumpPreUserCmd: &{sub {
   #Load variables
   my $timestamp = "20210226-012400";
   my $shadowdir = "/cygdrive/c/shadow/";
   my $shadows = "";

   my $bashscript = "DAYS=2\

etc (cut)

  print map { "   '$_' => $sharenameref->{$_}
" } sort(keys %{$sharenameref}) unless $?;
}}
Eval return value: 1
Running: /usr/local/bin/rsync_bpc --bpc-top-dir /var/lib/backuppc 
--bpc-host-name hostvm2 --bpc-share-name /cygdrive/C/ --bpc-bkup-num 0 
--bpc-bkup-comp 3 --bpc-bkup-prevnum -1 --bpc-bkup-prevcomp -1 
--bpc-bkup-inode0 5 --bpc-log-level 1 --bpc-attrib-new -e /usr/bin/ssh\ -l\ 
BackupPC --rsync-path=/cygdrive/c/cygwin64/root/bin/rsync.exe --super 
--recursive --protect-args --numeric-ids --perms --owner --group -D --times 
--links --hard-links --delete --partial --log-format=log:\ %o\ %i\ %B\ %8U,%8G\ 
%9l\ %f%L --stats --checksum --one-file-system --timeout=72000 
10.1.1.119:/cygdrive/C/ /
full backup started for directory /cygdrive/C/
Xfer PIDs are now 31043
This is the rsync child about to exec /usr/local/bin/rsync_bpc
cmdExecOrEval: about to exec /usr/local/bin/rsync_bpc --bpc-top-dir 
/var/lib/backuppc --bpc-host-name hostvm2 --bpc-share-name /cygdrive/C/ 
--bpc-bkup-num 0 --bpc-bkup-comp 3 --bpc-bkup-prevnum -1 --bpc-bkup-prevcomp -1 
--bpc-bkup-inode0 5 --bpc-log-level 1 --bpc-attrib-new -e /usr/bin/ssh\ -l\ 
BackupPC --rsync-path=/cygdrive/c/cygwin64/root/bin/rsync.exe --super 
--recursive --protect-args --numeric-ids --perms --owner --group -D --times 
--links --hard-links --delete --partial --log-format=log:\ %o\ %i\ %B\ %8U,%8G\ 
%9l\ %f%L --stats --checksum --one-file-system --timeout=72000 
10.1.1.119:/cygdrive/C/ /
Xfer PIDs are now 31043,31172
xferPids 31043,31172
rsync: [sender] send_files failed to open "/cygdrive/C/DumpStack.log.tmp": 
Device or resource busy (16)
newrecv cd+ ---r-x---   328384,  328384 0 .
rsync: [sender] send_files failed to open "/cygdrive/C/hiberfil.sys": Device or 
resource busy (16)
rsync: [sender] send_files failed to open "/cygdrive/C/pagefile.sys": Device or 
resource busy (16)
rsync: [sender] send_files failed to open "/cygdrive/C/swapfile.sys": Device or 
resource busy (16)


As far as I can tell, this would suggest that we are not actually doing 
the backup from the shadow copy... so, good news, I got a full backup of 
the machine (excluding open files), but bad news is I don't know why it 
didn't work.


I can login from the backuppc host as the backuppc user on the windows 
machine, and I can then create a shadow volume and delete it, but not 
sure what else to test, or where to get additional logs from


Any suggestions greatly appreciated

Regards,
Adam

On 26/2/21 07:31, Greg Harris wrote:
Okay, I was just making things way harder than they needed to be. 
Sorry, Jeff. Doug, from my understanding DeltaCopy is nearly just an 
alternative version of cygwin-rsyncd. I think all you need to do is 
dump these scripts into the bottom of the .pl file for the host. 
Otherwise, all of the other setup you normally do should be the same.


Thanks,

Greg Harris

On Feb 23, 2021, at 10:58 AM, backu...@kosowsky.org wrote:


Yes. SSH needs to be minimally configured just as you do when using
the 'rsync' method (over ssh) for any other system.

And SSH is pretty basic for any type of communication, login, file
transfer between machines in the 20th century (with the exception
maybe of pure Windows environments)

Technically, SSH may not be a dependency for rsync in that you can
use 'rsyncd' without SSH but the vast majority of rsync usage between
local and remote machines (with or without backuppc) is over ssh.

Greg Harris wrote at about 15:51:26 + on Tuesday, February 23, 2021:
I was hoping that I could reply with at lea

Re: [BackupPC-users] Using BackupPC 4.x with rrsync on the client

2021-02-10 Thread Adam Goryachev via BackupPC-users


On 11/2/21 10:14, Felix Wolters wrote:

Jeff,

I appreciate your detailed discussion of the topic, and I consider your
arguments to be strong.

But this …


Finally, while the sudoer code I shared in my previous note was just
aimed at restricting the sudoer power to rsync with specific flags,
I'm pretty sure that it could be easily expanded to
also limit access to only certain files/directories but just extending
the sudoer line to add the paths desired, thereby further restricting
the reach of the sudo command allowed.

seems to be the critical point to me. Have your tried that? (I haven’t
yet; a quick search at least doesn’t show up manifestations of this
approach.)


At the end of the day, with rrsync, you are still allowing root
access to ssh and that just doesn't feel right.

Well … any time you administrate a remote machine, you gain root access
over ssh to it, so this alone is a danger we are used to dealing with.
On the other hand, with the rsync-via-sudoers approach – don’t we open
rsync to the full system, so basically an attacker on the corrupted
server would be able to rsync the whole machine to himself? So, at the
end of the day, aren’t we trading a potential security vulnerability
(rrsync) for a heavy real one (rsync via sudoers)?


It seems that both approaches add some security; some of that security 
overlaps, and some is unique to each approach. If you really want to 
protect as much as possible, why not use both? Have a non-root user 
call sudo, which calls rrsync.

This is based on my minimal understanding that rrsync is simply a 
script which checks the arguments given to the real rsync before 
calling it.
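
A hypothetical combination, assuming rrsync is installed at 
/usr/bin/rrsync and a read-only export of / is acceptable:

# /etc/sudoers.d/backuppc on the client; rrsync reads the real rsync
# command from SSH_ORIGINAL_COMMAND, which sudo strips unless kept:
Defaults:backuppc env_keep += "SSH_ORIGINAL_COMMAND"
backuppc ALL=(root) NOPASSWD: /usr/bin/rrsync -ro /

# ~backuppc/.ssh/authorized_keys on the client, pinning the command:
command="sudo /usr/bin/rrsync -ro /",no-pty,no-port-forwarding,no-agent-forwarding ssh-ed25519 AAAA... backuppc@server

The forced command means even a compromised server can only speak 
rsync, and rrsync's -ro flag keeps that access read-only.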


PPS: also keep in mind that avoiding sudo avoids security complications 
in sudo, just as avoiding rrsync avoids potential security bugs in 
rrsync (eg, the ability to exploit argument processing to get remote 
code execution); both risks are avoided with plain rsync and ssh alone.


Just my 0.02c



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Using BackupPC 4.x with rrsync on the client

2021-02-10 Thread Adam Goryachev via BackupPC-users


On 10/2/21 02:56, Felix Wolters wrote:

Hello!

Let me first thank you for providing BackupPC as open source software. I
appreciate it a lot and consider it to be one of the most useful backup
systems out there!

I’d like to use it with restricted access to the client, so a
potentially corrupted BackupPC server wouldn’t be able to damage the
client machine and data. Using rsync for transfer with a Linux client,
rrsync (restricted rsync – as part of the rsync package) would be a
straigt forward solution to restrict an incoming ssh connection to only
rsync and only a given folder which I will set read only – which would
perfectly do the trick. Unfortunately, this doesn’t seem to work with
BackupPC over rsync, as far as I can see. I’m positive rrsync generally
works on the client as I use it successfully with plain rsync over ssh
on the same machine.

I’ve seen rare information on the internet about this, and it wouldn’t
help me so far.

Thank you for some help or instruction!


Hi Felix,

I'm not familiar with rrsync, but perhaps the first step would be to try 
it and see. If it doesn't work, then include some logs and what debug 
steps you have taken, or other information that might help us to help you.


Regards,
Adam



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Checking rsync progress/speed/status

2021-01-12 Thread Adam Goryachev via BackupPC-users


On 13/1/21 09:21, Les Mikesell wrote:

On Tue, Jan 12, 2021 at 4:15 PM Greg Harris wrote:

Yeah, that “if you can interpret it” part gets really hard when it looks like:

select(7, [6], [], [6], {tv_sec=60, tv_usec=0}) = 1 (in [6], left {tv_sec=59, tv_usec=99})
read(6, "\0\200\0\0\4\200\0\7\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 32768) = 27748

Scrolling at 32756 lines in around 30 seconds.

That tells you it is not hung up.  You could grep some 'open's out of
the stream to see what files it is examining.  Sometimes the client
side will do a whole lot of reading before it finds something that
doesn't match what the server already has.


I tend to use something like:

strace -e open -p <pid>

Also:

ls -l /proc/<pid>/fd

Regards,
Adam



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Improving backup speed

2021-01-07 Thread Adam Goryachev via BackupPC-users



On 8/1/21 06:30, Alexander Kobel wrote:

Hi Sorin,

On 1/7/21 9:39 AM, Sorin Srbu wrote:

Hello all!

Trying to improve the backup speed with BPC and looked into setting 
noatime in fstab.

But this article states some backup programs may bork if noatime is set.

https://lonesysadmin.net/2013/12/08/gain-30-linux-disk-performance-noatime-nodiratime-relatime/ 



What will BPC in particular do if noatime is set?


exactly what it's supposed to do. noatime or at least relatime (or 
perhaps recently lazytime) is the recommended setting:
https://backuppc.github.io/backuppc/BackupPC.html#Optimizations 



I think it depends on whether you are applying this setting change on 
the BPC server (specifically the BPC pool drive), or to the clients 
and/or the root FS of the BPC server.

If you have a separate filesystem for the BPC pool, then using this 
setting on that filesystem will not have any adverse impact, but will 
likely reduce overhead. Changing this setting elsewhere will have the 
documented impacts, and you would need to assess the results of those 
impacts against your own requirements (or provide a lot more 
information for anyone else to comment on).


Regards,
Adam



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Backuppc in large environments

2020-12-01 Thread Adam Goryachev via BackupPC-users


On 2/12/20 10:35, G.W. Haywood via BackupPC-users wrote:

Hi there,

On Tue, 1 Dec 2020, backuppc-users-requ...@lists.sourceforge.net wrote:


How big can backuppc reasonably scale?


Remember you can scale vertically or horizontally: either get a bigger 
machine for your backups, or get more small machines. If you had 3 (or 
more) small machines, you could set two of them to back up each target; 
this gives you some additional redundancy in your backup 
infrastructure, as long as your backup windows can support it and the 
backups don't add enough load to interfere with your daily operations.

I guess at some point machines that are too small would be more painful 
to manage, but there are a lot of options for scaling. Most people (a 
vague observation) just scale vertically and add enough RAM or IO 
performance to handle the load.




... daily backup volume is running around 750 GB per day, with two
database servers providing the majority of that volume (400 GB/day
from one and 150 GB/day from the other).


That's the part which bothers me.  I'm not sure that BackupPC's ways
of checking for changed files marry well with database files.  In a
typical relational database server you'll have some *big* files which
are modified by more or less random accesses.  They will *always* be
changed from the last backup.  The backup of virtual machines is not
dissimilar at the level of the partition image.  You need to stop the
machine to get a consistent backup, or use something like a snapshot.

I just want to second this. My preference is to snapshot the VM (via a 
pre-backup script from BackupPC) and then back up the contents of the VM 
(the actual target I use is the SAN server rather than the VM itself). 
For the DB, you should exclude the live DB files and have a script 
(either called separately or from a BPC pre-backup command) which 
exports/dumps the DB to another, consistent file. If possible, this file 
should be uncompressed (allowing rsync to better see the unchanged data) 
and written to the same filename/path each day (again, so rsync/BPC will 
see a file with a small amount of changes instead of a massive new file).
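
As a rough sketch (the script path and MySQL are assumptions here; 
$Conf{DumpPreUserCmd} and $Conf{BackupFilesExclude} are standard 
BackupPC settings), the host's config might contain:

$Conf{DumpPreUserCmd} = '$sshPath -q -x root@$host /usr/local/bin/pre-backup-dump.sh';
$Conf{BackupFilesExclude} = { '*' => ['/var/lib/mysql'] };

with a client-side script something like:

#!/bin/sh
# Hypothetical pre-backup dump: write to the same uncompressed path
# every day so rsync only has to transfer the changed portions.
mysqldump --single-transaction --all-databases > /var/backups/db-dump.sql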


If you do that, you might see your daily "changes" reduce compared to 
before.



... I have no idea what to expect the backup server to need in the
way of processing power.


Modest.  I've backed up dozens of Windows workstations and five or six
servers with just a 1.4GHz Celeron which was kicking around after it
was retired from the sales office.  The biggest CPU hog is likely to
be data compression, which you can tune.  Walking directory trees can
cause rsync to use quite a lot of memory.  You might want to look at
something like Icinga/Nagios to keep an eye on things.

FYI, I back up 57 hosts; my current BPC pool size is 7TB, 23M files. 
Some of my backup clients are external on the Internet, some are 
Windows, most are Linux.


My BPC server has 8G RAM and a quad core CPU:
Intel(R) Core(TM) i3-4150 CPU @ 3.50GHz

As others have said, you are most likely to be IO bound after the first 
couple of backups. You would probably be best advised to grab a spare 
machine, set up BPC, and run a couple of backups against some smaller 
targets. Once you have it working (if all goes smoothly, under 2 hours), 
target a larger server; you will soon see how it performs in your 
environment and where the relevant bottlenecks are.


PS: all you really need to think about is the CPU required to compress 
750GB per backup cycle (you only need to compress the changed files) and 
the disk IO to write that 750GB (plus a lot of disk IO for all the 
comparisons, which is probably the main load, and why you also want 
plenty of RAM to cache the directory trees).
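
As a rough illustration, assuming an 8-hour backup window: 750GB / (8 x 
3600s) is roughly 26MB/s of sustained pool writes, before you count the 
read/compare IO on top of that.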


Regards,
Adam



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


[BackupPC-users] Latest backuppc fuse version

2020-11-20 Thread Adam Goryachev via BackupPC-users

Hi,

I was looking at the backuppc GitHub project, but can't seem to find the 
current version of the backuppc fuse program; I only find old versions 
attached to the mailing list.


Can anyone advise where the current version is maintained please?

Regards,
Adam



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] What to do with a restore that's going on far too long?

2020-11-17 Thread Adam Goryachev via BackupPC-users



On 17/11/20 23:47, Adam Hardy wrote:

Which strace output do you monitor to see whether the process is hung up?
Sorry, I've only a little experience with low-level stuff.

I usually start with a plain strace <pid>; if I see the process 
doing "things" then I know it's not truly stuck. Sometimes I will limit 
it to only showing file opens, so I can watch it progress through a 
backup (or restore in your case), or I might use ls -l /proc/<pid>/fd to 
see which files are currently open/in use; if I see the files changing, 
then I know it's not stuck, and I can get some idea of the progress.


Regards,
Adam



-Original Message-
From: Adam Goryachev via BackupPC-users <
backuppc-users@lists.sourceforge.net>
Reply-To: "General list for user discussion, questions and support" <
backuppc-users@lists.sourceforge.net>
To: backuppc-users@lists.sourceforge.net
CC: Adam Goryachev 
Subject: Re: [BackupPC-users] What to do with a restore that's going on
far too long?
Date: Tue, 17 Nov 2020 22:53:02 +1100

On 17/11/20 22:39, Adam Hardy wrote:

OK, I just saw Raoul's message.

BackupPC_zcat is the tool I need.

Thanks Raoul


Personally, I prefer strace.

Regards,
Adam


-Original Message-
From: Adam Hardy 
Reply-To: "General list for user discussion, questions and support" <
backuppc-users@lists.sourceforge.net>
To: backuppc-users@lists.sourceforge.net
Subject: Re: [BackupPC-users] What to do with a restore that's going
on
far too long?
Date: Tue, 17 Nov 2020 10:50:26 +

Thanks Brad, but gzip complains the file is zlib and can't handle it.

adam@gondolin:~$ sudo zcat
/media/backuppc/usbbackup/backuppc/pc/erebor/RestoreLOG.26.z

gzip: /media/backuppc/usbbackup/backuppc/pc/erebor/RestoreLOG.26.z:
not
in gzip format
adam@gondolin:~$ sudo file
/media/backuppc/usbbackup/backuppc/pc/erebor/RestoreLOG.26.z
/media/backuppc/usbbackup/backuppc/pc/erebor/RestoreLOG.26.z: zlib
compressed data
adam@gondolin:~$

I can't find a command-line tool or package that can cat or less it either :(

What do you do then? Just WTF & kill it?

Cheers
Adam

-Original Message-
From: Brad Alexander 
Reply-To: "General list for user discussion, questions and support" <
backuppc-users@lists.sourceforge.net>
To: "General list for user discussion, questions and support" <
backuppc-users@lists.sourceforge.net>
Subject: Re: [BackupPC-users] What to do with a restore that's going
on
far too long?
Date: Mon, 16 Nov 2020 23:11:19 -0500

You could try zless or zmore, e.g. zless RestoreLOG.z

--b

On Mon, Nov 16, 2020 at 2:19 PM Adam Hardy <
adam.ha...@cyberspaceroad.com> wrote:

Hi

I'm using 3.3.0 on Linux Mint, to restore to a linux laptop.

I'm trying to access the restore log for a restore that is now
running
for about 12 hours and surely should be done. I can see there's a
substantial RestoreLOG.z but I can't tail it because it's
compressed.

Is there a way?

I'd like to know what it's trying to do before I kill it.

Assuming it is frozen, I'd also appreciate it if someone can tell
me
the best way to kill the job without losing the log.

Thanks!
Adam




___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] What to do with a restore that's going on far too long?

2020-11-17 Thread Adam Goryachev via BackupPC-users



On 17/11/20 22:39, Adam Hardy wrote:

OK, I just saw Raoul's message.

BackupPC_zcat is the tool I need.

Thanks Raoul


Personally, I prefer strace.
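
But if you do want to read the compressed log: BackupPC writes raw zlib 
streams (not gzip), which is why zcat and zless choke on them, and 
BackupPC_zcat decodes them. Something like this should work (the install 
path here is an assumption and varies by distro):

sudo /usr/share/backuppc/bin/BackupPC_zcat /media/backuppc/usbbackup/backuppc/pc/erebor/RestoreLOG.26.z | tail -50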

Regards,
Adam






___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Problems migrating backups from CentOS 6 to CentOS 7

2020-10-14 Thread Adam Goryachev via BackupPC-users
Since both are ext4 filesystems, I'd prefer a dd copy from one to the 
other. See this page for some sample command-line suggestions on how to 
do the copy over the network:


https://www.ndchost.com/wiki/server-administration/netcat-over-ssh

Just make sure the source and destination LVs are unmounted during the 
copy. It is likely to be significantly faster than the rsync method 
(potentially many days faster, though if the total filesystem size is 
only 200G it may not make that much difference).
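
A minimal sketch, assuming the pool lives on an LV named lv_backuppc in 
each machine's volume group and a trusted network between them (the LV 
names, hostname and port are assumptions; some netcat variants want 
"nc -l 19000" without -p, and you can tunnel through ssh as the linked 
page shows):

# on the destination (C7), with the new LV unmounted:
nc -l -p 19000 | dd of=/dev/VolGroup/lv_backuppc bs=64M

# on the source (C6), with the old LV unmounted:
dd if=/dev/VolGroup/lv_backuppc bs=64M | nc c7-host 19000

# then grow the filesystem into the larger 200 GiB LV:
e2fsck -f /dev/VolGroup/lv_backuppc && resize2fs /dev/VolGroup/lv_backuppc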


Regards,
Adam



 While trying to transfer the backups in /var/lib/BackupPC from the C6
 to the C7 machine, I run out of space.  On C6, the file system is a
 175 GiB LV in ext4 holding about 137 GB.  On C7, I started with a 200
 GiB LV in ext4 and ran out of space.



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/