Re: [BackupPC-users] rsync via ssh/rrsync

2024-05-23 Thread Gandalf Corvotempesta
This is the command in authorized_keys:

command="/usr/bin/rrsync /",restrict,from=

So it should allow rsync access to the whole server, as expected,
but BPC doesn't show any files in the backup.
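For comparison, a complete restricted-key entry on the client usually looks something like the following. This is a sketch only: the source address, key material and user are placeholders, not the poster's actual values.

```
# ~/.ssh/authorized_keys on the client (all on one line).
# rrsync confines the incoming rsync server to the given directory;
# adding -ro would make the access read-only.
command="/usr/bin/rrsync /",restrict,from="192.0.2.10" ssh-ed25519 AAAA...key... backuppc@backupserver
```

One thing that may be worth checking when the transfer succeeds but the backup browses as empty: rrsync interprets the requested path relative to its configured root, so the BackupPC share name and the rrsync directory argument have to agree (here, share "/" with `rrsync /`); a mismatch can silently back up a different directory than intended.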

On Thu, 23 May 2024 at 10:24, Gandalf Corvotempesta wrote:
>
> On Thu, 23 May 2024 at 09:16, Christian Völker via BackupPC-users wrote:
> >
> > Well, I guess you'll need to make sure ssh works fine.
> > To do so, go to your backuppc server and switch into the user context of
> > backuppc with "su - backuppc". From there, issue "ssh user@hostname" and
> > accept the host key of the target client.
> > Once done, it should run without any problems, as long as you have rsync
> > installed on your target and it can be found in the default path.
>
> Yes, it was an ssh-key issue: backuppc runs as the backuppc user, but I had
> transferred the root ssh key, not the backuppc key. Now this is fixed and
> the backup completes as expected; in the transfer log I can see the
> transferred files.
>
> BUT there is a big issue: when browsing the backup that just finished,
> backuppc says it is empty and no files are shown.
>
> Any clue?


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] rsync via ssh/rrsync

2024-05-23 Thread Gandalf Corvotempesta
On Thu, 23 May 2024 at 09:16, Christian Völker via BackupPC-users wrote:
>
> Well, I guess you'll need to make sure ssh works fine.
> To do so, go to your backuppc server and switch into the user context of
> backuppc with "su - backuppc". From there, issue "ssh user@hostname" and
> accept the host key of the target client.
> Once done, it should run without any problems, as long as you have rsync
> installed on your target and it can be found in the default path.

Yes, it was an ssh-key issue: backuppc runs as the backuppc user, but I had
transferred the root ssh key, not the backuppc key. Now this is fixed and
the backup completes as expected; in the transfer log I can see the
transferred files.

BUT there is a big issue: when browsing the backup that just finished,
backuppc says it is empty and no files are shown.

Any clue?




[BackupPC-users] rsync via ssh/rrsync

2024-05-23 Thread Gandalf Corvotempesta
Hi guys,
I'm using BackupPC with great success, backing up clients with rsyncd.

For one host I have to move from rsyncd to rsync over ssh (with rrsync as
the forced ssh command), but I'm hitting tons of different errors.

Can someone share a working config for rsync? Other than changing the share
name to the path (i.e. from "everything" as share name to "/") and changing
"rsyncd" to "rsync", is everything else the same?
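As a starting point, a minimal per-host config for rsync over ssh might look like this. It is a sketch under assumptions: the file path, login user and ssh options are placeholders to adapt to your install.

```
# pc/host.pl -- hypothetical BackupPC v4 per-host config for rsync over ssh
$Conf{XferMethod}     = 'rsync';
$Conf{RsyncShareName} = ['/'];                      # a path, not an rsyncd module
$Conf{RsyncSshArgs}   = ['-e', '$sshPath -l root']; # or the restricted key's user
```

With a forced rrsync command on the client, the login user is whatever account holds the restricted key.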


Re: [BackupPC-users] Incrementals and Full backups on v4

2020-02-13 Thread Gandalf Corvotempesta
So, this is a perfectly working system:
https://postimg.cc/PCZgN634

with *ALL* backups fully available?
I'll be able to restore *any* file from *any* backup, for example even
from #18?

On Thu, 13 Feb 2020 at 16:07, Michael Huntley wrote:
>
> Hi Gandalf,
>
> Not with v4.  V4 uses reverse deltas, so your most recent backup is a filled, 
> or complete backup.
>
> V4 calculates the difference between today and yesterday, and so on 
> backwards.  Just think of it as incrementals going  back in time and carrying 
> your full with you each day.   You have a full basket of goodies each day and 
> leave a trail behind you.
>
> You can also have older filled backups to reduce restore time as it lessens 
> the calculations BackupPC must perform.
>
> So, if you have a complete trail of incrementals going back two weeks there 
> is no data loss in that time period.
>
> If I am incorrect in any way in my analogy I am sure the list will correct me 
> and we will both learn.
>
> Kind regards,
>
> mph
>
> > On Feb 13, 2020, at 3:49 AM, Gandalf Corvotempesta 
> >  wrote:
> >
> > Just to confirm:
> >
> > If I have a full backup done on 2020-01-14 (doing one full each month)
> > and daily incrementals, keeping up to 14 incrementals, do I have data
> > loss?
> >
> > For example, is the incremental done yesterday (2020-02-12) relative
> > to the backup done on 2020-01-14?
> >
> > How does it work, exactly? Do I have a "broken chain" somewhere?
> >
> > With bacula, for example, I need at least one full backup and then every
> > incremental after it to restore from yesterday. When using
> > differentials, I need one full, all differentials, and all incrementals.
> >
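The reverse-delta behaviour described above can be sketched with a toy model. This is purely illustrative and is not BackupPC's actual on-disk format: the newest backup is kept complete ("filled"), and each older backup only records what differed from the backup one step newer.

```python
# Toy model of reverse deltas. backups is in chronological order;
# backups[-1] is the filled (complete) most-recent backup, every earlier
# entry is a reverse delta: the file contents as they were at that time,
# for paths that changed relative to the next backup (None = path did
# not exist yet at that time).

def restore(backups, i):
    """Reconstruct the complete file view of backups[i]."""
    view = dict(backups[-1])                      # start from the filled backup
    for k in range(len(backups) - 2, i - 1, -1):  # walk deltas back in time
        for path, content in backups[k].items():
            if content is None:
                view.pop(path, None)              # path absent at time k
            else:
                view[path] = content              # older version of the file
    return view

backups = [
    {"a": "v1", "b": None},   # delta: state at time 0 vs time 1
    {"b": "x", "c": None},    # delta: state at time 1 vs time 2
    {"a": "v2", "c": "y"},    # filled: complete most-recent backup
]
```

Restoring the newest backup costs nothing extra; restoring older ones applies more deltas, which is why keeping some older backups filled, as mentioned above, reduces restore time.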




[BackupPC-users] Incrementals and Full backups on v4

2020-02-13 Thread Gandalf Corvotempesta
Just to confirm:

If I have a full backup done on 2020-01-14 (doing one full each month)
and daily incrementals, keeping up to 14 incrementals, do I have data
loss?

For example, is the incremental done yesterday (2020-02-12) relative
to the backup done on 2020-01-14?

How does it work, exactly? Do I have a "broken chain" somewhere?

With bacula, for example, I need at least one full backup and then every
incremental after it to restore from yesterday. When using
differentials, I need one full, all differentials, and all incrementals.




[BackupPC-users] Multiple share, same server, different schedules

2020-02-12 Thread Gandalf Corvotempesta
Hi to all.
I'm using BPCv4 to back up around 50 hosts.
These hosts all have the same rsync configuration: one share and the same
schedule.

One of these hosts has two shares. One should be backed up as usual;
the other should be backed up every hour.

What is the best way to do this?
I'm thinking of adding a "custom" host on the backuppc server, e.g.
"share2-primaryserver.tld" with its own configuration, but I don't
like this idea.

Any better way?
Obviously, these two shares should be treated differently, each one
with its own schedule and expiry time; only the server is the same.
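For what it's worth, the approach usually suggested for this is exactly a second host entry, pointed at the real machine via $Conf{ClientNameAlias}. A sketch, where the host names, path and periods are placeholders:

```
# In the hosts file, add a second logical host for the hourly share:
#   primaryserver.tld         0  backuppc
#   share2-primaryserver.tld  0  backuppc

# pc/share2-primaryserver.tld.pl -- the alias points at the same machine
$Conf{ClientNameAlias} = 'primaryserver.tld';
$Conf{RsyncShareName}  = ['/path/to/share2'];  # placeholder path
$Conf{IncrPeriod}      = 0.04;                 # periods are in days; ~1 hour
```

Each logical host then gets its own schedule and expiry settings, while both back up the same physical server.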




[BackupPC-users] BackupPC opinion

2020-01-17 Thread Gandalf Corvotempesta
Just a useless email, but... BackupPC is *BY FAR* the best backup software
ever made.

I'm using it with success (though with some issues) on a very old and slow
server and, starting two weeks ago, also on a brand new and much faster
server.

Absolutely superb.

Maybe I have just one little complaint: what happens in case of disaster
on the backuppc server?
I won't be able to manually recover a file, right? rsnapshot is better at
this specific task, as it saves the file verbatim, as it comes from the
remote server.
Is any disaster recovery procedure available with BackupPC? A sort of
"TOC", like on a CD-ROM, saying which file is where, without having to
restore the whole pool?




[BackupPC-users] XFerLog - same and pool

2020-01-13 Thread Gandalf Corvotempesta
I'm doing a full backup for a new host. This is the very first backup
for this host (the server already holds backups of other hosts).

In the XferLog, some files are marked as "pool". I think this happens
thanks to dedup: the source file was already found in the pool, coming
from a different server, and was linked to it.

But what about "same"? I don't have any prior backup for this server,
so the file is the "same" as what?
Could two identical files (in two different locations), backed up at the
same time, be detected as "same" during the transfer?

For example:

$ echo "test" > /root/file1
$ cp /root/file1 /tmp/file2

When transferring, /root/file1 will be transferred first, resulting in
"new" in the XferLog.
Then /tmp/file2 is checked; will it be skipped and marked as "same"?
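As a toy model of how content-addressed pooling handles the two identical files above (purely illustrative; BackupPC's real pool layout, hash function and XferLog codes differ):

```python
import hashlib

# Toy content-addressed pool: the first file with a given content is
# stored; any later file whose content hashes to the same digest is only
# linked to the existing entry instead of being stored again.

class Pool:
    def __init__(self):
        self.store = {}                      # digest -> content

    def add(self, content: bytes) -> str:
        digest = hashlib.sha1(content).hexdigest()
        if digest in self.store:
            return "matched existing entry"  # dedup hit, nothing stored
        self.store[digest] = content
        return "stored as new"

pool = Pool()
first  = pool.add(b"test\n")   # /root/file1: content not seen before
second = pool.add(b"test\n")   # /tmp/file2: identical content, deduplicated
```

In this sketch the second identical file never creates a new pool entry, regardless of which host or backup it came from; whether the XferLog labels that case "same" or "pool" is exactly the question above.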




Re: [BackupPC-users] Log flood due to "Botch on admin job for admin : already in use!!"

2019-12-27 Thread Gandalf Corvotempesta
Running multiple nightly jobs will slow things down even more, and I'm
already using a 1/128 period.
But that's not the issue. The issue is that BPC is flooding the log,
writing the same message tens of times per second, every time.

On Fri, 27 Dec 2019 at 10:19, Alfred Weintoegl wrote:
>
> Maybe you should change the following options?:
>
> $Conf{MaxBackupPCNightlyJobs}
> and
> $Conf{BackupPCNightlyPeriod}
>
> The BackupPC documentation says:
> If BackupPC_nightly takes too long to run, the settings
> $Conf{MaxBackupPCNightlyJobs} and $Conf{BackupPCNightlyPeriod} can be
> used to run several BackupPC_nightly processes in parallel, and to split
> its job over several nights.
>
>
> regards
> Alfred
>
>
> On 27.12.2019 at 09:08, Gandalf Corvotempesta wrote:
> > When the nightly job spans multiple days (in my case, even a week),
> > the logs are flooded with:
> >
> > Botch on admin job for  admin : already in use!!
> >
> > Would it be possible to relax this logging, or add a sort of rate limit
> > like syslog does?
> >
> > There is no need to log the same error line 50 times per second, like
> > the following:
> >
> > 2019-12-26 16:36:41 Botch on admin job for  admin : already in use!!
> > 2019-12-26 16:36:41 Botch on admin job for  admin : already in use!!
> > 2019-12-26 16:36:41 Botch on admin job for  admin : already in use!!
> > 2019-12-26 16:36:41 Botch on admin job for  admin : already in use!!
> ...
> snip
>
>
>




[BackupPC-users] rrdUpdate: illegal attempt to update using time

2019-12-27 Thread Gandalf Corvotempesta
2019-12-26 22:44:23 Running BackupPC_rrdUpdate (pid=20275)
2019-12-26 22:44:24  admin-1 : ERROR: /var/log/BackupPC/poolUsage.rrd:
illegal attempt to update using time 1577404800 when last update time
is 1577404800 (minimum one second step)
2019-12-26 22:44:24 Finished  admin-1  (BackupPC_rrdUpdate)


Is there a better place to post bugs?




[BackupPC-users] bad poolRangeStart

2019-12-27 Thread Gandalf Corvotempesta
Got this error:

2019-12-26 22:44:19 Running BackupPC_nightly -m -P 10 864 867 (pid=20264)
2019-12-26 22:44:19  admin : /usr/local/backuppc/bin/BackupPC_nightly:
bad poolRangeStart '864'
2019-12-26 22:44:19 Finished  admin  (BackupPC_nightly -m -P 10 864 867)




[BackupPC-users] Log flood due to "Botch on admin job for admin : already in use!!"

2019-12-27 Thread Gandalf Corvotempesta
When the nightly job spans multiple days (in my case, even a week), the
logs are flooded with:

Botch on admin job for  admin : already in use!!

Would it be possible to relax this logging, or add a sort of rate limit
like syslog does?

There is no need to log the same error line 50 times per second, like
the following:

2019-12-26 16:36:41 Botch on admin job for  admin : already in use!!
2019-12-26 16:36:41 Botch on admin job for  admin : already in use!!
2019-12-26 16:36:41 Botch on admin job for  admin : already in use!!
2019-12-26 16:36:41 Botch on admin job for  admin : already in use!!
[ ...the same line repeated ~50 times within the same second... ]




Re: [BackupPC-users] rsync backup error

2019-12-26 Thread Gandalf Corvotempesta
Latest for both

On Thu, 26 Dec 2019 at 22:19, Craig Barratt via BackupPC-users
<backuppc-users@lists.sourceforge.net> wrote:

> What versions of BackupPC and rsync-bpc are you using?
>
> Craig
>
> On Thu, Dec 26, 2019 at 8:58 AM Gandalf Corvotempesta <
> gandalf.corvotempe...@gmail.com> wrote:
>
>> I've got this, and the backup restarted. Any idea?
>> Both client and server are on a private gigabit LAN, no firewall between
>> them.
>>
>> R
>> bpc_sysCall_checkFileMatch(var/www/clients/y/x/web/_vendor/jenky/laravel-plupload/tests/.gitkeep):
>> file doesn't exist
>> R
>> bpc_sysCall_checkFileMatch(var/www/clients/y/x/web/_vendor/kodeine/laravel-acl/src/migrations/.gitkeep):
>> file doesn't exist
>> [ 118 lines skipped ]
>> G
>> bpc_poolWrite_unmarkPendingDelete(/var/backups/backuppc/pool/64/36/6436d29cc1b4f3faedf65d71035d0e46)
>> failed; errno = 2
>> G Couldn't unmark candidate matching file
>> /var/backups/backuppc/pool/64/36/6436d29cc1b4f3faedf65d71035d0e46
>> (skipped; errno = 2)
>> G bpc_attribCache_dirWrite: failed to write attributes for dir
>>
>> feverything/fvar/fwww/fclients/fy/fx/fweb/f_vendor/fguestisp/fnews/fsrc/fNews/fviews/attrib
>> G
>> bpc_poolWrite_unmarkPendingDelete(/var/backups/backuppc/pool/66/8e/678f8b88482487bcb4de17bb54c3a520)
>> failed; errno = 2
>> G Couldn't unmark candidate matching file
>> /var/backups/backuppc/pool/66/8e/678f8b88482487bcb4de17bb54c3a520
>> (skipped; errno = 2)
>> G bpc_attribCache_dirWrite: failed to write attributes for dir
>> feverything/fvar/fwww/fclients/fy/fx/fweb/f_vendor/fjeremeamia/attrib
>> [ 22 lines skipped ]
>> rsync_bpc: [generator] write error: Broken pipe (32)
>> [ 201 lines skipped ]
>> DoneGen: 0 errors, 390 filesExist, 7215 sizeExist, 7215 sizeExistComp,
>> 2002483 filesTotal, 32067721837 sizeTotal, 45 filesNew, 2230 sizeNew,
>> 2230 sizeNewComp, 16624498 inode
>> rsync error: error in socket IO (code 10) at io.c(820) [generator=3.1.2.1]
>> rsync_bpc: [receiver] write error: Broken pipe (32)
>> Done: 0 errors, 5170 filesExist, 57500670 sizeExist, 57500670
>> sizeExistComp, 0 filesTotal, 0 sizeTotal, 9442 filesNew, 2245368408
>> sizeNew, 2245368408 sizeNewComp, 16642117 inode
>> rsync error: received SIGUSR1 (code 19) at main.c(1434) [receiver=3.1.2.1]
>> rsync_bpc exited with fatal status 10 (2560) (rsync error: received
>> SIGUSR1 (code 19) at main.c(1434) [receiver=3.1.2.1])
>>
>>


[BackupPC-users] rsync backup error

2019-12-26 Thread Gandalf Corvotempesta
I've got this, and the backup restarted. Any idea?
Both client and server are on a private gigabit LAN, no firewall between
them.

R 
bpc_sysCall_checkFileMatch(var/www/clients/y/x/web/_vendor/jenky/laravel-plupload/tests/.gitkeep):
file doesn't exist
R 
bpc_sysCall_checkFileMatch(var/www/clients/y/x/web/_vendor/kodeine/laravel-acl/src/migrations/.gitkeep):
file doesn't exist
[ 118 lines skipped ]
G 
bpc_poolWrite_unmarkPendingDelete(/var/backups/backuppc/pool/64/36/6436d29cc1b4f3faedf65d71035d0e46)
failed; errno = 2
G Couldn't unmark candidate matching file
/var/backups/backuppc/pool/64/36/6436d29cc1b4f3faedf65d71035d0e46
(skipped; errno = 2)
G bpc_attribCache_dirWrite: failed to write attributes for dir
feverything/fvar/fwww/fclients/fy/fx/fweb/f_vendor/fguestisp/fnews/fsrc/fNews/fviews/attrib
G 
bpc_poolWrite_unmarkPendingDelete(/var/backups/backuppc/pool/66/8e/678f8b88482487bcb4de17bb54c3a520)
failed; errno = 2
G Couldn't unmark candidate matching file
/var/backups/backuppc/pool/66/8e/678f8b88482487bcb4de17bb54c3a520
(skipped; errno = 2)
G bpc_attribCache_dirWrite: failed to write attributes for dir
feverything/fvar/fwww/fclients/fy/fx/fweb/f_vendor/fjeremeamia/attrib
[ 22 lines skipped ]
rsync_bpc: [generator] write error: Broken pipe (32)
[ 201 lines skipped ]
DoneGen: 0 errors, 390 filesExist, 7215 sizeExist, 7215 sizeExistComp,
2002483 filesTotal, 32067721837 sizeTotal, 45 filesNew, 2230 sizeNew,
2230 sizeNewComp, 16624498 inode
rsync error: error in socket IO (code 10) at io.c(820) [generator=3.1.2.1]
rsync_bpc: [receiver] write error: Broken pipe (32)
Done: 0 errors, 5170 filesExist, 57500670 sizeExist, 57500670
sizeExistComp, 0 filesTotal, 0 sizeTotal, 9442 filesNew, 2245368408
sizeNew, 2245368408 sizeNewComp, 16642117 inode
rsync error: received SIGUSR1 (code 19) at main.c(1434) [receiver=3.1.2.1]
rsync_bpc exited with fatal status 10 (2560) (rsync error: received
SIGUSR1 (code 19) at main.c(1434) [receiver=3.1.2.1])




[BackupPC-users] question about BackupPCNightlyPeriod, PoolSizeNightlyUpdatePeriod and RefCntFsck

2019-12-22 Thread Gandalf Corvotempesta
If I understood properly, pool cleanup is driven by
BackupPCNightlyPeriod, so if I set BackupPCNightlyPeriod to 1, the whole
pool is scanned every night and unused files are removed.

Is this related to PoolSizeNightlyUpdatePeriod and RefCntFsck?
For example, can I combine a more frequent cleanup (via
BackupPCNightlyPeriod) with much less frequent pool-size and refcount
checks via PoolSizeNightlyUpdatePeriod and RefCntFsck?

Any drawbacks?
What if I set BackupPCNightlyPeriod to 32 and
PoolSizeNightlyUpdatePeriod to 128?




Re: [BackupPC-users] Force pool cleanup

2019-12-20 Thread Gandalf Corvotempesta
Can I monitor the progress of that command?
Any workaround to run it with backuppc stopped? I have to free up as much
space as possible, as fast as possible; this server is very, very slow on
its own.

On Fri, 20 Dec 2019 at 10:13, Daniel Berteaud wrote:
>
> You can use:
>
> sudo -u backuppc /usr/share/BackupPC/bin/BackupPC_serverMesg BackupPC_nightly run
>
> With this, you don't have to stop BackupPC; you just ask the BackupPC
> daemon to start a pool cleanup right now.
>
> On 19 Dec 2019 at 16:08, Gandalf Corvotempesta
> gandalf.corvotempe...@gmail.com wrote:
>
> > Hi to all.
> > Is there any command I can run manually to force deletion of "expired"
> > files from the pool to free up disk space?
> >
> > I'm running the nightly on a very infrequent schedule, but right now I
> > need to run it to clean up as much as possible.
> >
> > Any hints?
> >
> >
>
> --
> [ https://www.firewall-services.com/ ]
> Daniel Berteaud
> FIREWALL-SERVICES SAS, La sécurité des réseaux
> Société de Services en Logiciels Libres
> Tél : +33.5 56 64 15 32
> Matrix: @dani:fws.fr
> [ https://www.firewall-services.com/ | https://www.firewall-services.com ]
>
>
>




Re: [BackupPC-users] Force pool cleanup

2019-12-19 Thread Gandalf Corvotempesta
Mine takes more than a week.

On Thu, 19 Dec 2019 at 16:50, Mike Hughes wrote:

> Hi Gandalf,
>
> This is what I use to clean up disk space:
> nohup /usr/share/BackupPC/bin/BackupPC_nightly 0 255 &
>
> If I want to watch it work I'll use this:
> tail nohup.out -F
>
> Usually finishes in 5-10 minutes.
>
> --
>
> Mike
>
> On Thu, 2019-12-19 at 16:08 +0100, Gandalf Corvotempesta wrote:
>
> Hi to all.
>
> Any command to run manually to force deletion of "expired" files from
>
> pool to free up disk space?
>
>
> I'm running the nightly on a very very low schedule , but right now I
>
> have to run it to clean up as much as possible
>
>
> Any hint ?


Re: [BackupPC-users] Force pool cleanup

2019-12-19 Thread Gandalf Corvotempesta
Tried with BPC both running and stopped; still not working.

On Thu, 19 Dec 2019 at 16:34, Robert Trevellyan wrote:
>
> Did you stop BackupPC before attempting the command? I believe that's 
> required for most, if not all, command-line operations.
>
> BackupPC_nightly will run all the nightly tasks. If your goal is a more 
> aggressive cleanup than usual, you'll need to adjust your nightly cleanup 
> configuration before stopping BackupPC and running the command.
>
> Check the usage for BackupPC_nightly to find out if you can tell it to skip 
> updating ref counts.
>
> Robert Trevellyan
>
>
> On Thu, Dec 19, 2019 at 10:20 AM Gandalf Corvotempesta 
>  wrote:
>>
>> On Thu, 19 Dec 2019 at 16:14, Robert Trevellyan wrote:
>> > You can run BackupPC_nightly from the command line.
>>
>> I'm trying with this:
>>
>> su backuppc -c "LC_ALL=C /usr/local/backuppc/bin/BackupPC_nightly -r 0 128"
>>
>> but it will exit immediately:
>>
>> BackupPC_nightly lock_off
>> log BackupPC_nightly skipping BackupPC_refCountUpdate
>>
>> I don't want to run refCount, just do pool pruning to free up space
>>
>>




Re: [BackupPC-users] Force pool cleanup

2019-12-19 Thread Gandalf Corvotempesta
On Thu, 19 Dec 2019 at 16:14, Robert Trevellyan wrote:
> You can run BackupPC_nightly from the command line.

I'm trying with this:

su backuppc -c "LC_ALL=C /usr/local/backuppc/bin/BackupPC_nightly -r 0 128"

but it will exit immediately:

BackupPC_nightly lock_off
log BackupPC_nightly skipping BackupPC_refCountUpdate

I don't want to run refCount, just do pool pruning to free up space




[BackupPC-users] Force pool cleanup

2019-12-19 Thread Gandalf Corvotempesta
Hi to all.
Is there any command I can run manually to force deletion of "expired"
files from the pool to free up disk space?

I'm running the nightly on a very infrequent schedule, but right now I
need to run it to clean up as much as possible.

Any hints?




Re: [BackupPC-users] rsync without checksum

2019-11-30 Thread Gandalf Corvotempesta
Please ignore the previous email; checksumming is done only on full
backups, not on incrementals.
I've noticed that the nightly job (I'm using a 1/128 cycle) is very
slow: it takes about 5 days to do 1/128, and while the nightly is
running, everything else slows down.


On Sat, 30 Nov 2019 at 10:44, Gandalf Corvotempesta wrote:
>
> Hi to all.
> I'm fighting against very, very slow backuppc backups.
> I've found that the --checksum arg to rsync slows everything down A LOT.
>
> Is it safe to remove it? I'm not worried about transferring more data
> over the network (I'm on a local net), nor about incremental backups
> getting bigger, but I have to keep the I/O and load as low as possible,
> and calculating checksums every time is not good for me.
>
> Any suggestions?




[BackupPC-users] rsync without checksum

2019-11-30 Thread Gandalf Corvotempesta
Hi to all.
I'm fighting against very, very slow backuppc backups.
I've found that the --checksum arg to rsync slows everything down A LOT.

Is it safe to remove it? I'm not worried about transferring more data
over the network (I'm on a local net), nor about incremental backups
getting bigger, but I have to keep the I/O and load as low as possible,
and calculating checksums every time is not good for me.

Any suggestions?




Re: [BackupPC-users] Wrong backup count

2019-08-08 Thread Gandalf Corvotempesta
On Thu, 8 Aug 2019 at 04:46, Craig Barratt via BackupPC-users wrote:
> Please look in the LOG files (the main BackupPC LOG and the per-host LOG)
> to see whether backups were started or not, and whether that matches what
> your configuration specifies.

Per host log:

2019-08-01 23:11:52 incr backup started for directory everything
2019-08-01 23:40:32 incr backup 541 complete, 42706 files, 5339379652
bytes, 0 xferErrs (0 bad files, 0 bad shares, 0 other)
2019-08-05 00:15:02 incr backup started for directory everything
2019-08-05 00:49:24 incr backup 542 complete, 42700 files, 5347717676
bytes, 0 xferErrs (0 bad files, 0 bad shares, 0 other)




Re: [BackupPC-users] PoolSizeNightlyUpdatePeriod and BackupPCNightlyPeriod

2019-08-08 Thread Gandalf Corvotempesta
Any advice on this? I have to keep the nightly job as light as possible.

On Fri, Jul 19, 2019 at 11:31 Gandalf Corvotempesta wrote:
>
> In the official docs, the max value is 16. As I have a very slow server with a
> very big pool, is 16 really the maximum I can set, or is it just an example?
>
> Can I set it to 32? The cntUpdate phase is taking ages... (3 days)




Re: [BackupPC-users] Wrong backup count

2019-08-08 Thread Gandalf Corvotempesta
On Tue, Aug 6, 2019 at 17:01 Norman Goldstein wrote:
> $Conf{IncrKeepCnt} = 6;

I have:
$Conf{IncrKeepCnt} = 6;

> $Conf{IncrKeepCntMin} = 1;
> $Conf{IncrAgeMax} = 30;

I have:

$Conf{IncrKeepCntMin} = 1;
$Conf{IncrAgeMax} = 10;




[BackupPC-users] Wrong backup count

2019-08-06 Thread Gandalf Corvotempesta
Can someone explain to me why I have only 4 backups?
It should keep at least 7 (1 full, 6 incrementals).

Any idea? Which setting should I check?


[BackupPC-users] PoolSizeNightlyUpdatePeriod and BackupPCNightlyPeriod

2019-07-19 Thread Gandalf Corvotempesta
In the official docs, the max value is 16. As I have a very slow server with a
very big pool, is 16 really the maximum I can set, or is it just an example?

Can I set it to 32? The cntUpdate phase is taking ages... (3 days)
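
For reference, the two settings being discussed can be sketched as below (semantics paraphrased from the 4.x docs; the documented values for $Conf{PoolSizeNightlyUpdatePeriod} are powers of two up to 16, so 32 may not be supported — verify before relying on it):

```perl
# Traverse only 1/4 of the pool per BackupPC_nightly run:
$Conf{BackupPCNightlyPeriod} = 4;

# Spread the pool-size/reference-count update (cntUpdate) over 16
# nights, the largest value listed in the docs:
$Conf{PoolSizeNightlyUpdatePeriod} = 16;
```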




Re: [BackupPC-users] BPC4: checksum

2017-11-05 Thread Gandalf Corvotempesta
What's the difference between a filled backup and a full one?

Currently I have set FullPeriod to 120, as we would like to do only
incrementals, but what's the FillCycle?
The docs are unclear about this, at least for non-native English readers.




On Nov 6, 2017, 2:38 AM, "Craig Barratt via BackupPC-users" <
backuppc-users@lists.sourceforge.net> wrote:

Removing --checksum will make an rsync full behave just like an incremental.

An equivalent, and clearer, way to do that is to only do incrementals.
BackupPC 4.x allows you to do that.  That can be accomplished by setting
$Conf{FullPeriod} to a large value.  You should also set $Conf{FillCycle}
to, eg, 7, so that every 7th backup is stored filled (doesn't affect the
client transfer).
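
As a concrete sketch (values illustrative):

```perl
# Incremental-only schedule: set the full period so high that fulls
# effectively never run, and store every 7th backup filled so
# browsing/restoring stays fast (filling is server-side only):
$Conf{FullPeriod} = 10000;
$Conf{FillCycle}  = 7;
```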

I agree with Les that a reasonable compromise is to set $Conf{FullPeriod}
to, eg, 28 or 56 so you do actually get a full backup every 4 or 8 weeks.

Craig

On Fri, Oct 27, 2017 at 9:42 AM, Les Mikesell <lesmikes...@gmail.com> wrote:

> On Fri, Oct 27, 2017 at 10:11 AM, Gandalf Corvotempesta
> <gandalf.corvotempe...@gmail.com> wrote:
> > I'm using ZFS, so checksumming is done by ZFS itself, is not an issue
> for me
> > to skip any data corruption check, as zfs does this automatically
> >
> > What I would like is to keep load as low as possible on clients and
> > checksumming every file is slowing down everything
>
> I don't currently have a system running so I can't give very specific
> advice, but if I were doing it I'd probably try to fix the schedule to
> do fulls every 4 or 8 weeks and make them happen on weekends if that
> is down time on the clients, skewing them so different large clients
> get the full on different weekends and ones that complete overnight on
> weekdays. Alternatively, if the target data is neatly subdivided
> into top level directories, I might try to split runs to a single
> large host giving it multiple names, each with different shares, using
> ClientNameAlias to point it to the same target to make it possible to
> split the fulls into different days so each completes in the available
> time.
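
A sketch of that ClientNameAlias split (host and share names are made up):

```perl
# hosts file: list the same physical machine under two made-up names,
#   bighost-a  0  backuppc
#   bighost-b  0  backuppc
#
# pc/bighost-a.pl — back up only /var from the real host:
$Conf{ClientNameAlias} = 'bighost.example.com';
$Conf{RsyncShareName}  = [ '/var' ];

# pc/bighost-b.pl — same target, a different share (and its own
# schedule, so the fulls land on different days):
# $Conf{ClientNameAlias} = 'bighost.example.com';
# $Conf{RsyncShareName}  = [ '/home' ];
```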
>
> --
>Les Mikesell
>  lesmikes...@gmail.com
>





Re: [BackupPC-users] BPC4: checksum

2017-10-27 Thread Gandalf Corvotempesta
BPC is able to transfer only changed files even without checksums; if not,
incremental backups (which don't use checksums) wouldn't be possible. That's
why I'm asking whether checksumming is mandatory even for fulls.

On Oct 27, 2017, 6:26 PM, "Stefan Peter" <s_pe...@swissonline.ch> wrote:

> Dear Gandalf Corvotempesta
> On 27.10.2017 17:11, Gandalf Corvotempesta wrote:
> > I'm using ZFS, so checksumming is done by ZFS itself; it's not an issue
> > for me to skip any data-corruption check, as ZFS does this automatically.
>
> But this won't help BackupPC to decide which files have changed and,
> therefore, need to be transfered from the client to the server.
>
> With kind regards
>
> Stefan Peter
>
>
> --
> A: Because it messes up the order in which people normally read text.
> Q: Why is top-posting such a bad thing?
> A: Top-posting.
> Q: What is the most annoying thing in e-mail?
> (See https://en.wikipedia.org/wiki/Posting_style for details)
>


Re: [BackupPC-users] BPC4: checksum

2017-10-27 Thread Gandalf Corvotempesta
I'm using ZFS, so checksumming is done by ZFS itself; it's not an issue for
me to skip any data-corruption check, as ZFS does this automatically.

What I would like is to keep the load on clients as low as possible, and
checksumming every file is slowing everything down.

On Oct 27, 2017, 5:04 PM, "Les Mikesell" wrote:

On Fri, Oct 27, 2017 at 9:31 AM, B  wrote:
>
> Correction (as often, I read much too fast):
>
>> This i going against: "I don't think so, because on incrementals BPC
>> doesn't use "--checksum" at all." (v.4.x doc):
>
> The doc doesn't speak about incrementals (only fulls), but to be sure
> about this, you should look at rsync_bpc source.
>

The default for rsync is to quickly skip any files where the timestamp
and length match the existing copy.  v3 used --ignore-times on full
runs to go through the motions of transferring by comparing block
checksums and transferring any differences.  --checksum is similar but
uses a single checksum over the whole file.   I thought in v4 this
mechanism is also related to the ability to match copied, moved or
renamed files to existing matching content in the pool, so removing it
might be a bad idea aside from eliminating the check for corruption or
changes in content that don't update the directory/inode.

--
   Les Mikesell
lesmikes...@gmail.com




Re: [BackupPC-users] BPC4: checksum

2017-10-27 Thread Gandalf Corvotempesta
In my case, using --checksum slows everything down by about 10x; that's
why I asked.

A full backup without checksums usually takes about 6 hours; with checksums
I need 2 days.

On Oct 27, 2017, 4:25 PM, "B" <lazyvi...@gmx.com> wrote:

> On Fri, 27 Oct 2017 12:56:36 +0200
> Gandalf Corvotempesta <gandalf.corvotempe...@gmail.com> wrote:
>
> > What happens if I remove "--checksum" from "full" backups ?
>
> Monstrosities:
> * an A380 will holographically crash onto your house,
> * your dog/cat/children/wife/goldfish will turn gay,
> * you'll awake one morning and all your machines will be reinstalled with
>   DOS-2.0,
> * you'll dream of Bill Gates every night until you pass away,
> etc…
>
> and apart that, may be:
> http://backuppc.sourceforge.net/faq/BackupPC.html#Rsync-checksum-caching
> can help as a base; in v.4.x, there are some light differences:
> http://backuppc.sourceforge.net/BackupPC-4.1.3.html
>
> This is going against: "I don't think so, because on incrementals BPC
> doesn't use "--checksum" at all." (v.4.x doc):
>
> $Conf{RsyncFullArgsExtra} = [ ... ];
>
> Additional arguments for a full rsync or rsyncd backup.
>
> The --checksum argument causes the client to send full-file checksum
> for every file (meaning the client reads every file and computes the
> checksum, which is sent with the file list). On the server,
> rsync_bpc will skip any files that have a matching full-file
> checksum, and size, mtime and number of hardlinks. Any file that has
> different attributes will be updating using the block rsync
> algorithm.
>
> In V3, full backups applied the block rsync algorithm to every file,
> which is a lot slower but a bit more conservative. To get that
> behavior, replace --checksum with --ignore-times.
>
> the server may not send any checksum command, but this states that the
> client will use them anyway.
>
> So I'll join "l, rick" in saying that if you deactivate it, your full
> backups will take "a while" - test it, but you won't love it.
>
> Jean-Yves
>


Re: [BackupPC-users] BPC4: checksum

2017-10-27 Thread Gandalf Corvotempesta
2017-10-27 15:10 GMT+02:00 l, rick :
> As I understand it, you will pull all new files instead of checking time
> stamps and hashing on both ends, wasting storage space as well as putting
> unneeded load on the network.

I don't think so, because on incrementals BPC doesn't use "--checksum" at all.



[BackupPC-users] BPC4: checksum

2017-10-27 Thread Gandalf Corvotempesta
What happens if I remove "--checksum" from "full" backups ?



[BackupPC-users] Archive to tape library

2017-10-12 Thread Gandalf Corvotempesta
Hi to all.
Is it possible to use the BackupPC archive function to write tar archives to
a tape library?

Will BackupPC be able to automatically change tapes?


Re: [BackupPC-users] Automatic backups not working

2017-10-07 Thread Gandalf Corvotempesta
It doesn't log anything.
It simply doesn't connect, and waits until the timeout value expires.

If I run the backup manually from the interface, it works.

On Oct 7, 2017, 11:04 PM, "Michael Stowe" <michael.st...@member.mensa.org>
wrote:

> On 2017-10-07 02:07, Gandalf Corvotempesta wrote:
>
> Any hint ?
>
> 2017-10-06 9:38 GMT+02:00 Gandalf Corvotempesta <
> gandalf.corvotempe...@gmail.com>:
>
> Hi, I have a strange issue. Automatic backup scheduling fails to back up
> the same 3 hosts every time. If I abort the automatically scheduled backup
> and run it manually from the CGI, the backup runs properly.
>
> Any idea ?
>
> “Unable to backup” is virtually nothing to go on. BackupPC has logs that
> can be reviewed, both for the main backup and for individual transports.
>


Re: [BackupPC-users] Automatic backups not working

2017-10-07 Thread Gandalf Corvotempesta
Any hint ?

2017-10-06 9:38 GMT+02:00 Gandalf Corvotempesta
<gandalf.corvotempe...@gmail.com>:
> Hi,
> I have a strange issue.
> Automatic backup scheduling fails to back up the same 3 hosts every time.
> If I abort the automatically scheduled backup and run it manually from
> the CGI, the backup runs properly.
>
> Any idea ?



[BackupPC-users] Automatic backups not working

2017-10-06 Thread Gandalf Corvotempesta
Hi,
I have a strange issue.
Automatic backup scheduling fails to back up the same 3 hosts every time.
If I abort the automatically scheduled backup and run it manually from
the CGI, the backup runs properly.

Any idea ?



Re: [BackupPC-users] Graph in dashboard

2017-10-05 Thread Gandalf Corvotempesta
2017-10-05 11:56 GMT+02:00 Alexander Moisseev via BackupPC-users
:
> In BackupPC v4 if you set "$Conf{PoolSizeNightlyUpdatePeriod}" to N (default
> is 16) nightly will process 1/N of the pool every night.
> Pool graphs will be updated on every nightly run (i.e. every night).

I'm referring to BackupPCNightlyPeriod, which is set to 1 by default.
I've now set it to 4. This shouldn't affect the graph updates, right?
Because I've seen that when the nightly is interrupted, the graphs are not updated.



[BackupPC-users] Graph in dashboard

2017-10-05 Thread Gandalf Corvotempesta
Are the graphs in the dashboard updated only after a nightly process has
processed the whole pool?
For example, if I set the nightly process to scan the pool over 2 or 3 days,
will the graphs still be updated every day, or only after 2 or 3 days?

I'm asking because I want daily graphs, but as the nightly is a very
heavy process (load is more than 10 when it runs) I would like to
split it over 4 days or so.



[BackupPC-users] nightly cntUpdate restarting

2017-09-27 Thread Gandalf Corvotempesta
I've seen that the "cntUpdate" phase of BackupPC_nightly restarts
multiple times.
This morning it was at 94/95; a couple of hours later I saw
it was at 70/95, so it had been restarted.

Right now it is at 66/95 again.

Nothing was written to the log.
Is this normal?



[BackupPC-users] rsync: writefd_unbuffered failed to write 4 bytes to socket [sender]: Connection reset by peer (104)

2017-09-25 Thread Gandalf Corvotempesta
Setting $Conf{ClientTimeout} lower than the "timeout" value set on the
rsyncd server will result in

rsync: writefd_unbuffered failed to write 4 bytes to socket [sender]:
Connection reset by peer (104)

after a couple of seconds.

I had set "43700" in rsyncd.conf but tried to use a lower timeout value
on the client side (900 seconds). I always thought the client was
allowed to use a lower timeout value, but immediately
after changing it, new backups started to fail.

After reverting it to the same value as rsyncd (or to a greater
value), backups worked again.
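
In short, the client-side timeout must be at least as large as the server-side one. A sketch with the values from this report:

```perl
# config.pl — BackupPC aborts the transfer after this much inactivity.
# Keep it >= the rsyncd-side "timeout", or transfers die with
# "Connection reset by peer (104)":
$Conf{ClientTimeout} = 43700;

# rsyncd.conf on the client, for comparison:
#   timeout = 43700
```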



Re: [BackupPC-users] BackuPC 4 hang during transfer

2017-09-23 Thread Gandalf Corvotempesta
Without using checksums, how does BackupPC do the pool matching?

AFAIK it uses the checksum as the filename, but if the checksum is not sent
during the transfer..

Anyway, I've seen that the checksum is used only on fulls. Currently I've set a
full every 60 days, because incrementals done after a successful full are
rocket fast.

It would be better, for me, to remove the checksum option from fulls
entirely, staying with the default rsync behavior, but I don't know if this
makes poor use of the BackupPC pool.

On Sep 23, 2017, 4:33 AM, "Adam Goryachev" <
mailingli...@websitemanagers.com.au> wrote:

> 2017-09-22 17:24 GMT+02:00 Gandalf Corvotempesta
>> <gandalf.corvotempe...@gmail.com>:
>>
>>> 2017-09-22 17:20 GMT+02:00 Les Mikesell <lesmikes...@gmail.com>:
>>>
>>>> How does your overall CPU and RAM use look while this is happening?
>>>> Remember, your situation is unusual in that you are competing with the
>>>> ZFS compression activity.
>>>>
>>> CPU almost idle, RAM used at 70%, due to ZFS ARC cache (50% of ram)
>>> No swap.
>>>
>>
> On 23/9/17 02:50, Gandalf Corvotempesta wrote:
>
>> Just removed "--checksum" from the BackupPC arguments.
>> Now it is... FAAAST
>>
>> What I backed up in about 40 hours now took 60 minutes.
>>
>> YES: 40 hours => 60 minutes.
>>
>> Is --checksum really needed? (checksum is also missing from rsnapshot's
>> arguments; that's why rsnapshot is rocket fast)
>>
> I can't be sure for BPC4, but maybe you need more than one full backup to
> get the checksum information available to BPC. I think on v3 you needed two
> full backups before this would happen.
> The other point to consider, is that this shows you *DID* have a
> performance issue, but you didn't seem to find it. checksum will increase
> the read load and CPU load on at least the client (and possibly BPC server
> depending on where it gets the checksum info from). So you should have seen
> that you were being limited by disk IO or CPU on either BPC server, or the
> client. I'm not sure of the memory requirement for the checksum option, but
> this too might have been an issue, especially if BPC tried to uncompress
> the file into memory. Also, all of this would trash your disk read cache on
> both systems, further increasing demands on the disks.
>
> Whether you need to use --checksum or not, will depend on if you are happy
> to potentially skip backing up some files without knowing about it until
> you need to do a restore. Of course, this is a little contrived, as it
> still requires:
> a) size doesn't change
> b) timestamp doesn't change
> c) content *does* change
> That is not a normal process, but it is the corner case that always ends
> up being the most important file ;)
>
>
>
> Regards,
> Adam
>
>
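
The corner case above (same size, same timestamp, changed content) is easy to reproduce. A minimal Python sketch of rsync's default "quick check" — not BackupPC code, just an illustration:

```python
import os
import tempfile

def quick_check_unchanged(path, size, mtime):
    """rsync-style quick check: treat the file as unchanged if
    size and mtime both match the previous backup's metadata."""
    st = os.stat(path)
    return st.st_size == size and int(st.st_mtime) == int(mtime)

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "config.txt")
    with open(path, "w") as f:
        f.write("AAAA")                     # original content
    st = os.stat(path)
    size, mtime = st.st_size, st.st_mtime   # metadata seen at backup time

    with open(path, "w") as f:
        f.write("BBBB")                     # same length, new content
    os.utime(path, (mtime, mtime))          # restore the old mtime

    # Same size, same mtime, different content: the quick check skips it.
    print(quick_check_unchanged(path, size, mtime))  # prints: True
```

Only --checksum (or --ignore-times) would catch this file; the default quick check silently skips it.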


Re: [BackupPC-users] BackuPC 4 hang during transfer

2017-09-22 Thread Gandalf Corvotempesta
Just removed "--checksum" from the BackupPC arguments.
Now it is... FAAAST

What I backed up in about 40 hours now took 60 minutes.

YES: 40 hours => 60 minutes.

Is --checksum really needed? (checksum is also missing from rsnapshot's
arguments; that's why rsnapshot is rocket fast)

2017-09-22 17:24 GMT+02:00 Gandalf Corvotempesta
<gandalf.corvotempe...@gmail.com>:
> 2017-09-22 17:20 GMT+02:00 Les Mikesell <lesmikes...@gmail.com>:
>> How does your overall CPU and RAM use look while this is happening?
>> Remember, your situation is unusual in that you are competing with the
>> ZFS compression activity.
>
> CPU almost idle, RAM used at 70%, due to ZFS ARC cache (50% of ram)
> No swap.



Re: [BackupPC-users] BackuPC 4 hang during transfer

2017-09-22 Thread Gandalf Corvotempesta
2017-09-22 17:20 GMT+02:00 Les Mikesell :
> How does your overall CPU and RAM use look while this is happening?
> Remember, your situation is unusual in that you are competing with the
> ZFS compression activity.

CPU almost idle, RAM used at 70%, due to ZFS ARC cache (50% of ram)
No swap.



Re: [BackupPC-users] BackuPC 4 hang during transfer

2017-09-22 Thread Gandalf Corvotempesta
Running transfers with log level 4, I can see that most of the slowdown is
caused by:

G bpc_file_checksum()

calls.

2017-09-22 15:31 GMT+02:00 Gandalf Corvotempesta
<gandalf.corvotempe...@gmail.com>:
> Also tried with "nc". I'm able to push/pull 110-120MB/s between these
> two servers.
>
> 2017-09-22 15:22 GMT+02:00 Gandalf Corvotempesta
> <gandalf.corvotempe...@gmail.com>:
>> I've made a little bit of progress.
>> The slow select calls are always network-related, but my net is not
>> saturated and I'm able to push 1 Gbps regularly with iperf.
>> Moreover, on the client side, rsync is sending files very fast.
>>
>> Could there be a bottleneck in BackupPC's network management?
>>
>> 2017-09-22 15:12 GMT+02:00 Gandalf Corvotempesta
>> <gandalf.corvotempe...@gmail.com>:
>>> 2017-09-21 14:37 GMT+02:00 Craig Barratt via BackupPC-users
>>> <backuppc-users@lists.sourceforge.net>:
>>>> I recommend running strace -p PID -T on the rsync_bpc process to see what 
>>>> it
>>>> is up to, and how long various system calls take.  I agree your backups
>>>> should run much faster.
>>>
>>> Here is an extract.
>>> Some calls are really slow, more than 1 second.
>>> They seem to be selects, right? Writes seem to be OK, less than 1/10
>>> second
>>>
>>> [pid  3453] select(4, [3], [], NULL, {tv_sec=60, tv_usec=0} 
>>> [pid  3452] read(5, "75601860.M846377P28101.xx.gu"..., 8184) = 4081 
>>> <0.35>
>>> [pid  3452] select(6, [5], [], NULL, {tv_sec=60, tv_usec=0} 
>>> [pid  3453] <... select resumed> )  = 1 (in [3], left {tv_sec=59,
>>> tv_usec=220734}) <0.779284>
>>> [pid  3453] read(3, "\374\17\0\7", 4)   = 4 <0.35>
>>> [pid  3453] select(4, [3], [], NULL, {tv_sec=60, tv_usec=0}) = 1 (in
>>> [3], left {tv_sec=59, tv_usec=98}) <0.28>
>>> [pid  3453] read(3, ".yy,S=1262266,W=1278715:2,"..., 4092) = 2896 <0.34>
>>> [pid  3453] select(4, [3], [], NULL, {tv_sec=60, tv_usec=0}) = 1 (in
>>> [3], left {tv_sec=59, tv_usec=98}) <0.29>
>>> [pid  3453] read(3, "383235089.M974673P2358.xx.gu"..., 1196) = 1196 
>>> <0.27>
>>> [pid  3453] select(7, NULL, [6], [6], {tv_sec=60, tv_usec=0}) = 1 (out
>>> [6], left {tv_sec=59, tv_usec=98}) <0.35>
>>> [pid  3453] write(6, "381086351.M234084P17834.xx.g"..., 4087) = 4087 
>>> <0.39>
>>> [pid  3452] <... select resumed> )  = 1 (in [5], left {tv_sec=59,
>>> tv_usec=220263}) <0.779785>
>>> [pid  3453] select(4, [3], [], NULL, {tv_sec=60, tv_usec=0} 
>>> [pid  3452] read(5, "381086351.M234084P17834.xx.g"..., 8184) = 4087 
>>> <0.40>
>>> [pid  3452] select(6, [5], [], NULL, {tv_sec=60, tv_usec=0} 
>>> [pid  3453] <... select resumed> )  = 1 (in [3], left {tv_sec=58,
>>> tv_usec=683238}) <1.316780>
>>> [pid  3453] read(3, "\374\17\0\7", 4)   = 4 <0.56>
>>> [pid  3453] select(4, [3], [], NULL, {tv_sec=60, tv_usec=0}) = 1 (in
>>> [3], left {tv_sec=59, tv_usec=98}) <0.29>
>>> [pid  3453] read(3,
>>> "t,S=9113,W=9346:2,S\0\231#V\10\300,\270\215\21z\360\250"..., 4092) =
>>> 4092 <0.28>
>>> [pid  3453] select(7, NULL, [6], [6], {tv_sec=60, tv_usec=0}) = 1 (out
>>> [6], left {tv_sec=59, tv_usec=98}) <0.47>
>>> [pid  3453] write(6, "45773421.M45371P9859.xx.gues"..., 4091) = 4091 
>>> <0.51>
>>> [pid  3452] <... select resumed> )  = 1 (in [5], left {tv_sec=58,
>>> tv_usec=682833}) <1.317230>
>>> [pid  3453] select(4, [3], [], NULL, {tv_sec=60, tv_usec=0} 
>>> [pid  3452] read(5, "45773421.M45371P9859.xx.gues"..., 8184) = 4091 
>>> <0.33>
>>> [pid  3452] select(6, [5], [], NULL, {tv_sec=60, tv_usec=0} 
>>> [pid  3453] <... select resumed> )  = 1 (in [3], left {tv_sec=58,
>>> tv_usec=879335}) <1.120688>
>>> [pid  3453] read(3, "\374\17\0\7", 4)   = 4 <0.57>
>>> [pid  3453] select(4, [3], [], NULL, {tv_sec=60, tv_usec=0}) = 1 (in
>>> [3], left {tv_sec=59, tv_usec=97}) <0.29>
>>> [pid  3453] read(3, ".it,S=17245,W=17685:2,S\0]CR\364\340\327Q\f"...,
>>> 4092) = 4092 <0.26>
>>> [pid  3453] select(7, NULL, [6], [6], {tv_sec=60, tv_usec=0}) = 1 (out
>>> [6], left {tv_sec=59

Re: [BackupPC-users] BackuPC 4 hang during transfer

2017-09-22 Thread Gandalf Corvotempesta
Also tried with "nc". I'm able to push/pull 110-120MB/s between these
two servers.

2017-09-22 15:22 GMT+02:00 Gandalf Corvotempesta
<gandalf.corvotempe...@gmail.com>:
> I've made a little bit of progress.
> The slow select calls are always network-related, but my net is not
> saturated and I'm able to push 1 Gbps regularly with iperf.
> Moreover, on the client side, rsync is sending files very fast.
>
> Could there be a bottleneck in BackupPC's network management?
>
> 2017-09-22 15:12 GMT+02:00 Gandalf Corvotempesta
> <gandalf.corvotempe...@gmail.com>:
>> 2017-09-21 14:37 GMT+02:00 Craig Barratt via BackupPC-users
>> <backuppc-users@lists.sourceforge.net>:
>>> I recommend running strace -p PID -T on the rsync_bpc process to see what it
>>> is up to, and how long various system calls take.  I agree your backups
>>> should run much faster.
>>
>> Here is an extract.
>> Some calls are really slow, more than 1 second.
>> They seem to be selects, right? Writes seem to be OK, less than 1/10 second
>>
>> [pid  3453] select(4, [3], [], NULL, {tv_sec=60, tv_usec=0} 
>> [pid  3452] read(5, "75601860.M846377P28101.xx.gu"..., 8184) = 4081 
>> <0.35>
>> [pid  3452] select(6, [5], [], NULL, {tv_sec=60, tv_usec=0} 
>> [pid  3453] <... select resumed> )  = 1 (in [3], left {tv_sec=59,
>> tv_usec=220734}) <0.779284>
>> [pid  3453] read(3, "\374\17\0\7", 4)   = 4 <0.35>
>> [pid  3453] select(4, [3], [], NULL, {tv_sec=60, tv_usec=0}) = 1 (in
>> [3], left {tv_sec=59, tv_usec=98}) <0.28>
>> [pid  3453] read(3, ".yy,S=1262266,W=1278715:2,"..., 4092) = 2896 <0.34>
>> [pid  3453] select(4, [3], [], NULL, {tv_sec=60, tv_usec=0}) = 1 (in
>> [3], left {tv_sec=59, tv_usec=98}) <0.29>
>> [pid  3453] read(3, "383235089.M974673P2358.xx.gu"..., 1196) = 1196 
>> <0.27>
>> [pid  3453] select(7, NULL, [6], [6], {tv_sec=60, tv_usec=0}) = 1 (out
>> [6], left {tv_sec=59, tv_usec=98}) <0.35>
>> [pid  3453] write(6, "381086351.M234084P17834.xx.g"..., 4087) = 4087 
>> <0.39>
>> [pid  3452] <... select resumed> )  = 1 (in [5], left {tv_sec=59,
>> tv_usec=220263}) <0.779785>
>> [pid  3453] select(4, [3], [], NULL, {tv_sec=60, tv_usec=0} 
>> [pid  3452] read(5, "381086351.M234084P17834.xx.g"..., 8184) = 4087 
>> <0.40>
>> [pid  3452] select(6, [5], [], NULL, {tv_sec=60, tv_usec=0} 
>> [pid  3453] <... select resumed> )  = 1 (in [3], left {tv_sec=58,
>> tv_usec=683238}) <1.316780>
>> [pid  3453] read(3, "\374\17\0\7", 4)   = 4 <0.56>
>> [pid  3453] select(4, [3], [], NULL, {tv_sec=60, tv_usec=0}) = 1 (in
>> [3], left {tv_sec=59, tv_usec=98}) <0.29>
>> [pid  3453] read(3,
>> "t,S=9113,W=9346:2,S\0\231#V\10\300,\270\215\21z\360\250"..., 4092) =
>> 4092 <0.28>
>> [pid  3453] select(7, NULL, [6], [6], {tv_sec=60, tv_usec=0}) = 1 (out
>> [6], left {tv_sec=59, tv_usec=98}) <0.47>
>> [pid  3453] write(6, "45773421.M45371P9859.xx.gues"..., 4091) = 4091 
>> <0.51>
>> [pid  3452] <... select resumed> )  = 1 (in [5], left {tv_sec=58,
>> tv_usec=682833}) <1.317230>
>> [pid  3453] select(4, [3], [], NULL, {tv_sec=60, tv_usec=0} 
>> [pid  3452] read(5, "45773421.M45371P9859.xx.gues"..., 8184) = 4091 
>> <0.33>
>> [pid  3452] select(6, [5], [], NULL, {tv_sec=60, tv_usec=0} 
>> [pid  3453] <... select resumed> )  = 1 (in [3], left {tv_sec=58,
>> tv_usec=879335}) <1.120688>
>> [pid  3453] read(3, "\374\17\0\7", 4)   = 4 <0.57>
>> [pid  3453] select(4, [3], [], NULL, {tv_sec=60, tv_usec=0}) = 1 (in
>> [3], left {tv_sec=59, tv_usec=97}) <0.29>
>> [pid  3453] read(3, ".it,S=17245,W=17685:2,S\0]CR\364\340\327Q\f"...,
>> 4092) = 4092 <0.26>
>> [pid  3453] select(7, NULL, [6], [6], {tv_sec=60, tv_usec=0}) = 1 (out
>> [6], left {tv_sec=59, tv_usec=98}) <0.64>
>> [pid  3453] write(6, "389990620.M619008P11470.xx.g"..., 4092) = 4092 
>> <0.51>
>> [pid  3452] <... select resumed> )  = 1 (in [5], left {tv_sec=58,
>> tv_usec=878920}) <1.121130>
>> [pid  3453] select(7, NULL, [6], [6], {tv_sec=60, tv_usec=0} 
>> [pid  3452] read(5,  
>> [pid  3453] <... select resumed> )  = 1 (out [6], left {tv_sec=59,
>> tv_usec=98}) <0.29>
>> [pid  3452] <

Re: [BackupPC-users] BackuPC 4 hang during transfer

2017-09-22 Thread Gandalf Corvotempesta
I've made a little bit of progress.
The slow select calls are always network-related, but my network is not
saturated and I'm able to push 1 Gbps regularly with iperf.
Moreover, on the client side, rsync is sending files very fast.

Could it be a bottleneck in BackupPC's network management?

2017-09-22 15:12 GMT+02:00 Gandalf Corvotempesta
<gandalf.corvotempe...@gmail.com>:
> 2017-09-21 14:37 GMT+02:00 Craig Barratt via BackupPC-users
> <backuppc-users@lists.sourceforge.net>:
>> I recommend running strace -p PID -T on the rsync_bpc process to see what it
>> is up to, and how long various system calls take.  I agree your backups
>> should run much faster.
>
> Here is an extract.
> Some calls are really slow, more than 1 second.
> They seem to be select calls, right? Writes seem to be OK, less than 1/10 of a second.
>
> [pid  3453] select(4, [3], [], NULL, {tv_sec=60, tv_usec=0} 
> [pid  3452] read(5, "75601860.M846377P28101.xx.gu"..., 8184) = 4081 <0.35>
> [pid  3452] select(6, [5], [], NULL, {tv_sec=60, tv_usec=0} 
> [pid  3453] <... select resumed> )  = 1 (in [3], left {tv_sec=59,
> tv_usec=220734}) <0.779284>
> [pid  3453] read(3, "\374\17\0\7", 4)   = 4 <0.35>
> [pid  3453] select(4, [3], [], NULL, {tv_sec=60, tv_usec=0}) = 1 (in
> [3], left {tv_sec=59, tv_usec=98}) <0.28>
> [pid  3453] read(3, ".yy,S=1262266,W=1278715:2,"..., 4092) = 2896 <0.34>
> [pid  3453] select(4, [3], [], NULL, {tv_sec=60, tv_usec=0}) = 1 (in
> [3], left {tv_sec=59, tv_usec=98}) <0.29>
> [pid  3453] read(3, "383235089.M974673P2358.xx.gu"..., 1196) = 1196 <0.27>
> [pid  3453] select(7, NULL, [6], [6], {tv_sec=60, tv_usec=0}) = 1 (out
> [6], left {tv_sec=59, tv_usec=98}) <0.35>
> [pid  3453] write(6, "381086351.M234084P17834.xx.g"..., 4087) = 4087 
> <0.39>
> [pid  3452] <... select resumed> )  = 1 (in [5], left {tv_sec=59,
> tv_usec=220263}) <0.779785>
> [pid  3453] select(4, [3], [], NULL, {tv_sec=60, tv_usec=0} 
> [pid  3452] read(5, "381086351.M234084P17834.xx.g"..., 8184) = 4087 <0.40>
> [pid  3452] select(6, [5], [], NULL, {tv_sec=60, tv_usec=0} 
> [pid  3453] <... select resumed> )  = 1 (in [3], left {tv_sec=58,
> tv_usec=683238}) <1.316780>
> [pid  3453] read(3, "\374\17\0\7", 4)   = 4 <0.56>
> [pid  3453] select(4, [3], [], NULL, {tv_sec=60, tv_usec=0}) = 1 (in
> [3], left {tv_sec=59, tv_usec=98}) <0.29>
> [pid  3453] read(3,
> "t,S=9113,W=9346:2,S\0\231#V\10\300,\270\215\21z\360\250"..., 4092) =
> 4092 <0.28>
> [pid  3453] select(7, NULL, [6], [6], {tv_sec=60, tv_usec=0}) = 1 (out
> [6], left {tv_sec=59, tv_usec=98}) <0.47>
> [pid  3453] write(6, "45773421.M45371P9859.xx.gues"..., 4091) = 4091 
> <0.51>
> [pid  3452] <... select resumed> )  = 1 (in [5], left {tv_sec=58,
> tv_usec=682833}) <1.317230>
> [pid  3453] select(4, [3], [], NULL, {tv_sec=60, tv_usec=0} 
> [pid  3452] read(5, "45773421.M45371P9859.xx.gues"..., 8184) = 4091 <0.33>
> [pid  3452] select(6, [5], [], NULL, {tv_sec=60, tv_usec=0} 
> [pid  3453] <... select resumed> )  = 1 (in [3], left {tv_sec=58,
> tv_usec=879335}) <1.120688>
> [pid  3453] read(3, "\374\17\0\7", 4)   = 4 <0.57>
> [pid  3453] select(4, [3], [], NULL, {tv_sec=60, tv_usec=0}) = 1 (in
> [3], left {tv_sec=59, tv_usec=97}) <0.29>
> [pid  3453] read(3, ".it,S=17245,W=17685:2,S\0]CR\364\340\327Q\f"...,
> 4092) = 4092 <0.26>
> [pid  3453] select(7, NULL, [6], [6], {tv_sec=60, tv_usec=0}) = 1 (out
> [6], left {tv_sec=59, tv_usec=98}) <0.64>
> [pid  3453] write(6, "389990620.M619008P11470.xx.g"..., 4092) = 4092 
> <0.51>
> [pid  3452] <... select resumed> )  = 1 (in [5], left {tv_sec=58,
> tv_usec=878920}) <1.121130>
> [pid  3453] select(7, NULL, [6], [6], {tv_sec=60, tv_usec=0} 
> [pid  3452] read(5,  
> [pid  3453] <... select resumed> )  = 1 (out [6], left {tv_sec=59,
> tv_usec=98}) <0.29>
> [pid  3452] <... read resumed> "389990620.M619008P11470.xx.g"...,
> 8184) = 4092 <0.32>
> [pid  3453] write(6, "\333\32;^\26\252\321[:>8", 11 
> [pid  3452] select(6, [5], [], NULL, {tv_sec=60, tv_usec=0} 
> [pid  3453] <... write resumed> )   = 11 <0.30>
> [pid  3452] <... select resumed> )  = 1 (in [5], left {tv_sec=59,
> tv_usec=98}) <0.26>
> [pid  3453] select(4, [3], [], NULL, {tv_sec=60, tv_usec=0} 
> [pid  3452] read(5, "\3

Re: [BackupPC-users] BackuPC 4 hang during transfer

2017-09-22 Thread Gandalf Corvotempesta
2017-09-21 14:37 GMT+02:00 Craig Barratt via BackupPC-users
:
> I recommend running strace -p PID -T on the rsync_bpc process to see what it
> is up to, and how long various system calls take.  I agree your backups
> should run much faster.

Here is an extract.
Some calls are really slow, more than 1 second.
They seem to be select calls, right? Writes seem to be OK, less than 1/10 of a second.

[pid  3453] select(4, [3], [], NULL, {tv_sec=60, tv_usec=0} 
[pid  3452] read(5, "75601860.M846377P28101.xx.gu"..., 8184) = 4081 <0.35>
[pid  3452] select(6, [5], [], NULL, {tv_sec=60, tv_usec=0} 
[pid  3453] <... select resumed> )  = 1 (in [3], left {tv_sec=59,
tv_usec=220734}) <0.779284>
[pid  3453] read(3, "\374\17\0\7", 4)   = 4 <0.35>
[pid  3453] select(4, [3], [], NULL, {tv_sec=60, tv_usec=0}) = 1 (in
[3], left {tv_sec=59, tv_usec=98}) <0.28>
[pid  3453] read(3, ".yy,S=1262266,W=1278715:2,"..., 4092) = 2896 <0.34>
[pid  3453] select(4, [3], [], NULL, {tv_sec=60, tv_usec=0}) = 1 (in
[3], left {tv_sec=59, tv_usec=98}) <0.29>
[pid  3453] read(3, "383235089.M974673P2358.xx.gu"..., 1196) = 1196 <0.27>
[pid  3453] select(7, NULL, [6], [6], {tv_sec=60, tv_usec=0}) = 1 (out
[6], left {tv_sec=59, tv_usec=98}) <0.35>
[pid  3453] write(6, "381086351.M234084P17834.xx.g"..., 4087) = 4087 <0.39>
[pid  3452] <... select resumed> )  = 1 (in [5], left {tv_sec=59,
tv_usec=220263}) <0.779785>
[pid  3453] select(4, [3], [], NULL, {tv_sec=60, tv_usec=0} 
[pid  3452] read(5, "381086351.M234084P17834.xx.g"..., 8184) = 4087 <0.40>
[pid  3452] select(6, [5], [], NULL, {tv_sec=60, tv_usec=0} 
[pid  3453] <... select resumed> )  = 1 (in [3], left {tv_sec=58,
tv_usec=683238}) <1.316780>
[pid  3453] read(3, "\374\17\0\7", 4)   = 4 <0.56>
[pid  3453] select(4, [3], [], NULL, {tv_sec=60, tv_usec=0}) = 1 (in
[3], left {tv_sec=59, tv_usec=98}) <0.29>
[pid  3453] read(3,
"t,S=9113,W=9346:2,S\0\231#V\10\300,\270\215\21z\360\250"..., 4092) =
4092 <0.28>
[pid  3453] select(7, NULL, [6], [6], {tv_sec=60, tv_usec=0}) = 1 (out
[6], left {tv_sec=59, tv_usec=98}) <0.47>
[pid  3453] write(6, "45773421.M45371P9859.xx.gues"..., 4091) = 4091 <0.51>
[pid  3452] <... select resumed> )  = 1 (in [5], left {tv_sec=58,
tv_usec=682833}) <1.317230>
[pid  3453] select(4, [3], [], NULL, {tv_sec=60, tv_usec=0} 
[pid  3452] read(5, "45773421.M45371P9859.xx.gues"..., 8184) = 4091 <0.33>
[pid  3452] select(6, [5], [], NULL, {tv_sec=60, tv_usec=0} 
[pid  3453] <... select resumed> )  = 1 (in [3], left {tv_sec=58,
tv_usec=879335}) <1.120688>
[pid  3453] read(3, "\374\17\0\7", 4)   = 4 <0.57>
[pid  3453] select(4, [3], [], NULL, {tv_sec=60, tv_usec=0}) = 1 (in
[3], left {tv_sec=59, tv_usec=97}) <0.29>
[pid  3453] read(3, ".it,S=17245,W=17685:2,S\0]CR\364\340\327Q\f"...,
4092) = 4092 <0.26>
[pid  3453] select(7, NULL, [6], [6], {tv_sec=60, tv_usec=0}) = 1 (out
[6], left {tv_sec=59, tv_usec=98}) <0.64>
[pid  3453] write(6, "389990620.M619008P11470.xx.g"..., 4092) = 4092 <0.51>
[pid  3452] <... select resumed> )  = 1 (in [5], left {tv_sec=58,
tv_usec=878920}) <1.121130>
[pid  3453] select(7, NULL, [6], [6], {tv_sec=60, tv_usec=0} 
[pid  3452] read(5,  
[pid  3453] <... select resumed> )  = 1 (out [6], left {tv_sec=59,
tv_usec=98}) <0.29>
[pid  3452] <... read resumed> "389990620.M619008P11470.xx.g"...,
8184) = 4092 <0.32>
[pid  3453] write(6, "\333\32;^\26\252\321[:>8", 11 
[pid  3452] select(6, [5], [], NULL, {tv_sec=60, tv_usec=0} 
[pid  3453] <... write resumed> )   = 11 <0.30>
[pid  3452] <... select resumed> )  = 1 (in [5], left {tv_sec=59,
tv_usec=98}) <0.26>
[pid  3453] select(4, [3], [], NULL, {tv_sec=60, tv_usec=0} 
[pid  3452] read(5, "\333\32;^\26\252\321[:>8", 8184) = 11 <0.11>
[pid  3452] select(6, [5], [], NULL, {tv_sec=60, tv_usec=0} 
[pid  3453] <... select resumed> )  = 1 (in [3], left {tv_sec=58,
tv_usec=696881}) <1.303136>
[pid  3453] read(3, "\374\17\0\7", 4)   = 4 <0.28>
[pid  3453] select(4, [3], [], NULL, {tv_sec=60, tv_usec=0}) = 1 (in
[3], left {tv_sec=59, tv_usec=98}) <0.27>
[pid  3453] read(3, ".w4w.yy,S=1841,W=1902:2,S\0"..., 4092) = 2896 <0.26>
[pid  3453] select(4, [3], [], NULL, {tv_sec=60, tv_usec=0}) = 1 (in
[3], left {tv_sec=59, tv_usec=999866}) <0.000149>
[pid  3453] read(3, "s\311\34\344:>:74312186.M702139P19191.x3"...,
1196) = 1196 <0.31>
[pid  3453] select(7, NULL, [6], [6], {tv_sec=60, tv_usec=0}) = 1 (out
[6], left {tv_sec=59, tv_usec=98}) <0.38>
[pid  3453] write(6, "14095904.M963616P21646.xx.gu"..., 4078) = 4078 <0.48>
[pid  3452] <... select resumed> )  = 1 (in [5], left {tv_sec=58,
tv_usec=695988}) <1.304041>
[pid  3453] select(4, [3], [], NULL, {tv_sec=60, tv_usec=0} 
[pid  3452] read(5, "14095904.M963616P21646.xx.gu"..., 8184) = 4078 <0.17>
[pid  3452] select(6, [5], [], NULL, {tv_sec=60, 
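The slow-call pattern in extracts like the one above can be quantified rather than eyeballed. Below is a small sketch that parses `strace -T` output and totals the elapsed time per syscall; the regex and the sample lines are assumptions modelled on the extract above, not part of any BackupPC tooling:

```python
import re
from collections import Counter

# Total elapsed time per syscall in `strace -T` output, where each line
# ends with the call's duration in angle brackets, e.g. <1.316780>.
# Handles both plain calls and "<... select resumed> )" continuation lines.
PAT = re.compile(r'(?:<\.\.\. )?(\w+)(?:\(| resumed>).*<([\d.]+)>\s*$')

def summarize(lines):
    totals, calls = Counter(), []
    for line in lines:
        m = PAT.search(line)
        if m:
            secs = float(m.group(2))
            totals[m.group(1)] += secs
            calls.append((secs, m.group(1)))
    # calls sorted slowest-first makes the worst offenders easy to spot
    return totals, sorted(calls, reverse=True)

# Hypothetical sample lines in the same shape as the strace extract above.
sample = [
    '[pid  3453] <... select resumed> )  = 1 (in [3]) <1.316780>',
    '[pid  3453] write(6, "...", 4087) = 4087 <0.000039>',
]
totals, slowest = summarize(sample)
print(f"select total: {totals['select']:.3f}s")  # time spent blocked in select
print(f"slowest call: {slowest[0][1]}")
```

Feeding it a full capture (`strace -p PID -T 2>&1 | python3 summarize.py`) would show whether the time really is dominated by select, i.e. waiting on the network, as suggested in this thread.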

Re: [BackupPC-users] BPC4: shared files from pool and compression

2017-09-21 Thread Gandalf Corvotempesta
2017-09-21 16:35 GMT+02:00 Craig Barratt via BackupPC-users
:
> You can look in the XferLOG file to see whether a file is transferred or
> not, and whether it matches the pool.

AFAIK, XferLOG is not written in real time; it seems to be flushed at some interval.
Does BPC keep files in memory and then write them to disk after a while? This
could explain why I can see short freezes on the source server (by looking
at strace or rsyncd.log) every 30-40 seconds.

Tons of files are transferred, then a short freeze, then new files are
transferred.
This doesn't happen with plain rsync, only with BPC.

It seems that some files are kept in memory and then flushed to disk.

> Mixing uncompressed and compressed backups is a bad idea (ie: changing the
> compress level between non-zero and zero.  BackupPC stores the compressed
> and uncompressed pool files separately, so you'll end up with two copies of
> many or every file.  It's ok to change the compression level (eg, from 3 to
> 1).  That will just mean new pool files will use the new compression.
> Overall, file compression adds very little overhead, since when a pool file
> matches, it doesn't have to be compressed.

I'll go with compression 0. ZFS is compressing, so there is no need to
add an additional layer.
What happens if I delete backup 0 (full), keeping only backup 1 (incremental)?
Would the incremental one be stale, or 100% available?

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackuPC 4 hang during transfer

2017-09-21 Thread Gandalf Corvotempesta
2017-09-21 14:37 GMT+02:00 Craig Barratt via BackupPC-users
:
> Back to OP - from the rsync_bpc args you sent, it looks like you have
> XferLogLevel set to 0.  I'd recommend at least 1 or 2.  That would help
> confirm your excludes are correct (they do look pretty comprehensive, but
> you should verify the actual files being backed up match your intended
> excludes).  As earlier posters pointed out, incorrect excludes could mean
> you are trying to backup potentially very large files.

Is it possible to show, somewhere, when a file is not transferred because
it already exists in the pool?
rsyncd.log on the source server doesn't show files that are not transferred
(it only logs transferred files), so looking at that log, BPC seems to be
frozen. I'm sure that it is not frozen, by stracing the process and seeing
some lstat/open calls.

It would be useful to have a sort of "rsyncd.log" on the BackupPC side,
showing whether a file is transferred or not, and why it was not transferred.



Re: [BackupPC-users] BackuPC 4 hang during transfer

2017-09-21 Thread Gandalf Corvotempesta
2017-09-21 14:37 GMT+02:00 Craig Barratt via BackupPC-users
:
> Yes, Les is right - if you change compression (on to off, or off to on) then
> the next backup will be like a first-time backup - it will be forced to a
> full and will not take advantage of any prior backup.

I know, but in my case, when I changed compression to 0, the backup was
already running, so it was not affected by this change.

Anyway, the next backups should run a little bit faster, as they don't
have to compress every file.

> Back to OP - from the rsync_bpc args you sent, it looks like you have
> XferLogLevel set to 0.  I'd recommend at least 1 or 2.  That would help
> confirm your excludes are correct (they do look pretty comprehensive, but
> you should verify the actual files being backed up match your intended
> excludes).  As earlier posters pointed out, incorrect excludes could mean
> you are trying to backup potentially very large files.

A very large file would produce some output in strace, and that is not my case.

> What is the version of your remote rsync?

I was on 3.1.0, but I've updated to the latest version.
Nothing changed.

> Have you confirmed you are not running short of server memory?

No memory issues; it's the first thing that I checked.

> You could see whether ZFS is the issue by (temporarily) running a new
> BackupPC instance with storage on a different file system (eg, ext4).

This is a new install; previously (about 15-20 days ago) I was on XFS,
and even before that on ext4.
ZFS is the fastest one (particularly when deleting huge directories
with tons of hardlinks made by rsnapshot).



Re: [BackupPC-users] BackuPC 4 hang during transfer

2017-09-20 Thread Gandalf Corvotempesta
2017-09-20 17:15 GMT+02:00 Ray Frush :
> You indicate that your ZFS store does 50-70MB/s.  That's pretty slow in
> today's world.  I get bothered when storage is slower than a single 10K RPM
> drive (~100-120MB/sec).  I wonder how fast metadata operations are.
> bonnie++ benchmarks might indicate an issue here as BackupPC is metadata
> intensive, and has to read a lot of metadata to properly place files in the
> CPOOL.   Compare those results with other storage to gauge how well your ZFS
> is performing.   I'm not a ZFS expert.

Yes, it is not very fast, but keep in mind that I'm using SATA disks.
But the issue is not server performance, because all the other software is
able to back up in a very short time with the same hardware.

> 2) rsyncd vs rsync:
> When BackupPC uses the 'rsync' method, it uses ssh to start a dedicated
> rsync server on the client system with parameters picked by BackupPC
> developers.
> When you use the 'rsyncd' method,  the options on the client side were
> picked by you, and may not play well with BackupPC.  It would be easy to
> test around this by setting up backupPC to use the 'rsync' method instead
> (setting up ssh correctly of course) and seeing if you note any improvement.
> That will isolate any issues with your rsyncd configs.

Ok, I can try that.

> A 4x 1Gbps network link will look exactly like a single 1Gbps per network
> channel (stream) unless you've got some really nice port aggregation
> hardware that can spray data at 4Gbps across those.   As such, unless you
> have parallel jobs running (multithreaded), I wouldn't expect to see any
> product do better than 1Gbps from any single client in your environment.
> The BackupPC server, running multiple backup jobs could see a benefit from
> the bonded connection, being able to manage 4 1Gpbs streams at the same
> time, under optimal conditions, which never happens.

I'm running 4 concurrent backups; with plain rsync/rsnapshot I'm able to run 8.



Re: [BackupPC-users] BackuPC 4 hang during transfer

2017-09-20 Thread Gandalf Corvotempesta
2017-09-20 16:52 GMT+02:00 Gandalf Corvotempesta
<gandalf.corvotempe...@gmail.com>:
> running "zfs iostat" show about 50-70MB/s

Even more:

              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
rpool       6.66T  4.21T    291    393  3.44M  5.88M
rpool       6.66T  4.21T    128  10.2K  14.3M   133M
rpool       6.66T  4.21T    147  7.44K  16.6M   111M
rpool       6.66T  4.21T    118  3.18K  7.55M  58.6M
rpool       6.66T  4.21T    339  3.70K  32.1M   101M



Re: [BackupPC-users] BackuPC 4 hang during transfer

2017-09-20 Thread Gandalf Corvotempesta
2017-09-20 16:50 GMT+02:00 Les Mikesell :
> You mentioned using zfs with compression.  What kind of disk
> performance does that give when working with odd sized chunks?

running "zfs iostat" show about 50-70MB/s



Re: [BackupPC-users] BackuPC 4 hang during transfer

2017-09-20 Thread Gandalf Corvotempesta
2017-09-20 16:14 GMT+02:00 Ray Frush :
> Question:   Just how big is the host you're trying to backup?  GB?  number
> of files?

From the BackupPC web page: 147466.4 MB, 3465344 files.

> What is the network connection between the client and the backup
> server?

4x 1GbE bonded on both sides.


> I'm curious about what it is about your environment that is making
> it so hard to back up.

It's the same with ALL the servers that I'm trying to back up.
BPC is about 12 times slower than any other tested solution.
Obviously, same backup server, same source servers.

> I believe I've mentioned my largest, hairiest server is 770GB with 6.8
> Million files.   Full backups on that system take 8.5 hours to run.
> Incrementals take 20-30 minutes.   I have no illusions that the
> infrastructure I'm using to back things up is the fastest, but it's fast
> enough for the job.

The long-running backup (about 38 hours, still running) is an incremental.



Re: [BackupPC-users] BackuPC 4 hang during transfer

2017-09-20 Thread Gandalf Corvotempesta
2017-09-20 15:10 GMT+02:00 Craig Barratt via BackupPC-users
:
> Since you have concluded what the problem is, I don't have anything
> constructive to add.

No, I've not found the exact bug. Some help is needed.
Having a backup still running after (right now) 38 hours, when the same
host is able to back up the same server with the same rsync arguments in
about 3 hours, is making me suspicious.

I know that BPC does much more than other software. OK, that's clear,
but I could understand a small slowdown (maybe 200%? 300%?), not 12 times
slower. TWELVE TIMES.
It's 1200% slower.



Re: [BackupPC-users] BackuPC 4 hang during transfer

2017-09-20 Thread Gandalf Corvotempesta
2017-09-19 18:01 GMT+02:00 Gandalf Corvotempesta
<gandalf.corvotempe...@gmail.com>:
> Removed "--inplace" from the command like and running the same backup
> right now from BPC.
> It's too early to be sure, but seems to go further. Let's see during the 
> night.

It is still running. This is OK on one side, as "--inplace" may have
caused the issue, but on the other side,
there is something not working properly in BPC. An incremental backup
is still running (about 80%)
since yesterday at 17:39.

rsync, rsnapshot, bacula, rdiff-backup, bareos, and borg took about 3
hours (some a little bit more, some a little bit less) to back up this
host in the same way (more or less the same final backup size).
BackupPC is taking 10 times longer than any other backup software, which
makes BPC unusable with huge hosts. An order of magnitude is totally
unacceptable and can only mean some bugs in the code.

As I wrote many months ago, I think there is something not working
properly in BPC; it's impossible that BPC deduplication is slowing
down backups in this way.
Also, I've removed compression, because I'm using ZFS with native
compression, so BPC doesn't have to decompress, check the local file,
compress the new one, and so on.

And after the backup, refCnt and fsck are run. For this server, the
"post-backup" phase takes another hour or two.

Maybe I have a hardware issue on this backup server, but all the other
backup software that I've tried runs on this server with no
issue at all. Only BPC is slow as hell.



Re: [BackupPC-users] BackuPC 4 hang during transfer

2017-09-19 Thread Gandalf Corvotempesta
Yes, running this right now on both server and client.

2017-09-19 18:00 GMT+02:00 Les Mikesell <lesmikes...@gmail.com>:
> On Tue, Sep 19, 2017 at 10:47 AM, Gandalf Corvotempesta
> <gandalf.corvotempe...@gmail.com> wrote:
>> 2017-09-19 17:41 GMT+02:00 Les Mikesell <lesmikes...@gmail.com>:
>>> If your client rsyncd is configured to write a log file you should be
>>> able to see the invocation there.   Just guessing, I'd say there is
>>> probably something wrong in your excludes and you are wandering into
>>> the /sys, /proc, or /dev directories and hanging when you access the
>>> contents.
>>
>> rsyncd is logging, but there isn't any invocation argument.
>> Anyway, the same command line (except for the "bpc" custom arguments)
>> is used by me with rsnapshot and plain "rsync", and I'm able to back up
>> the whole server without issues.
>>
>> Maybe BPC is unable to handle some kinds of files that plain rsync is
>> able to handle?
>
> A brute-force approach to debugging would be to start an strace on the
> rsyncd client soon after the backup starts and after it hangs try to
> figure out the failing operation and hope you have enough scroll back
> buffer to find where the relevant file descriptor was opened so you
> know what it was.
>
> --
> Les Mikesell
>   lesmikes...@gmail.com
>



Re: [BackupPC-users] BackuPC 4 hang during transfer

2017-09-19 Thread Gandalf Corvotempesta
2017-09-19 16:51 GMT+02:00 Gandalf Corvotempesta
<gandalf.corvotempe...@gmail.com>:
> /usr/local/bin/rsync_bpc --bpc-top-dir /var/backups/backuppc
> --bpc-host-name myhost --bpc-share-name everything --bpc-bkup-num 1
> --bpc-bkup-comp 3 --bpc-bkup-prevnum -1 --bpc-bkup-prevcomp -1
> --bpc-bkup-inode0 7047290 --bpc-attrib-new --bpc-log-level 0 --super
> --recursive --protect-args --numeric-ids --perms --owner --group -D
> --times --links --hard-links --delete --delete-excluded --partial
> --log-format=log: %o %i %B %8U,%8G %9l %f%L --stats
> --block-size=131072 --inplace --timeout=72000
> --password-file=/var/backups/backuppc/pc/myhost/.rsyncdpw24363
> --exclude=var/backups/* --exclude=admin_backups/*
> --exclude=reseller_backups/* --exclude=user_backups/* --exclude=tmp/*
> --exclude=proc/* --exclude=sys/* --exclude=media/* --exclude=mnt/*
> --exclude=tmp/* --exclude=wp-content/cache/object/*
> --exclude=wp-content/cache/page_enhanced/*
> --exclude=wp-content/cache/db/*
> --exclude=usr/local/directadmin/data/tickets/* --exclude=var/cache/*
> --exclude=var/log/directadmin/* --exclude=var/log/lastlog
> --exclude=var/log/rsync* --exclude=var/log/bacula/*
> --exclude=var/log/ntpstats --exclude=var/lib/mlocate
> --exclude=var/lib/mysql/* --exclude=var/lib/apt/lists/*
> --exclude=var/cache/apt/archives/* --exclude=usr/local/php55/sockets/*
> --exclude=var/run/* --exclude=var/spool/exim/*
> backuppc@myhost::everything /


Removed "--inplace" from the command like and running the same backup
right now from BPC.
It's too early to be sure, but seems to go further. Let's see during the night.



Re: [BackupPC-users] BackuPC 4 hang during transfer

2017-09-19 Thread Gandalf Corvotempesta
2017-09-19 17:41 GMT+02:00 Les Mikesell :
> If your client rsyncd is configured to write a log file you should be
> able to see the invocation there.   Just guessing, I'd say there is
> probably something wrong in your excludes and you are wandering into
> the /sys, /proc, or /dev directories and hanging when you access the
> contents.

rsyncd is logging, but there isn't any invocation argument.
Anyway, the same command line (except for the "bpc" custom arguments)
is used by me with rsnapshot and plain "rsync", and I'm able to back up
the whole server without issues.

Maybe BPC is unable to handle some kinds of files that plain rsync is
able to handle?
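As an aside, rsyncd's own per-file transfer logging can be enabled on the client, which gives at least the "which files were actually sent" visibility discussed in this thread (it still won't show the daemon's invocation arguments). A minimal sketch of the relevant rsyncd.conf directives; the module name, paths, and secrets-file location are assumptions modelled on the "everything" module used in this thread:

```ini
# Sketch of client-side rsyncd.conf logging (assumed paths and module name).
log file = /var/log/rsyncd.log
transfer logging = yes
# %o = operation, %h = host, %m = module, %u = user, %f = file, %l = length
log format = %o %h (%m) %u %f %l

[everything]
    path = /
    read only = yes
    auth users = backuppc
    secrets file = /etc/rsyncd.secrets
```

With transfer logging on, a backup that looks "frozen" from the client can be checked against the log: if no new files appear for long stretches, the stall is on the server side.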



Re: [BackupPC-users] BackuPC 4 hang during transfer

2017-09-19 Thread Gandalf Corvotempesta
rsyncd is running on all servers, as I'm able to back them up properly
with plain rsync or rsnapshot.
Only BPC is freezing.

I don't use SSH at all; I'm connecting directly to rsyncd.

2017-09-19 17:19 GMT+02:00 Ray Frush <fr...@rams.colostate.edu>:
> Gandalf-
>
> It looks like you're using the "rsyncd" method vs "rsync" is that correct?
> I don't have experience using the 'rsyncd' method, so my ability to continue
> troubleshooting ends here.  The main thing that jumps out at me is to check
> that rsyncd is actually running on your clients, and that you can connect
> from the backuppc server using the command you found above.
>
>
> I use the 'rsync' method, and the rest of my answer below is predicated on
> that scheme:
>
> I kicked off a backup and did a 'ps -elf | grep backuppc' to get these from
> the BackupPC server:
>
> 1) the BackupPC_dump command
>
> backuppc  9603  3682  0 09:05 ?00:00:00 /usr/bin/perl
> /usr/local/BackupPC/bin/BackupPC_dump -i isast201
>
> 2) the local rsync_bpc instance:
>
> backuppc  9606  9603  8 09:05 ?00:00:03 /usr/local/bin/rsync_bpc
> --bpc-top-dir /mnt/backups/BackupPC --bpc-host-name isast201
> --bpc-share-name / --bpc-bkup-num 118 --bpc-bkup-comp 3 --bpc-bkup-prevnum
> 117 --bpc-bkup-prevcomp 3 --bpc-bkup-inode0 203221 --bpc-attrib-new
> --bpc-log-level 1 -e /usr/bin/ssh -l root --rsync-path=/usr/bin/rsync
> --super --recursive --protect-args --numeric-ids --perms --owner --group -D
> --times --links --hard-links --delete --partial --log-format=log: %o %i %B
> %8U,%8G %9l %f%L --stats --iconv=utf8,UTF-8 --timeout=72000 --exclude=stuff
> isast201:/ /
>
> 3) the ssh command initiated by rsync_bpc to the client to initiate the
> server:  THIS IS THE IMPORTANT ONE to test next:
>
> backuppc  9607  9606  1 09:05 ?00:00:00 /usr/bin/ssh -l root
> isast201 /usr/bin/rsync --server --sender -slHogDtpre.iLsf --iconv=UTF-8
>
> 4) The active portion of process 9606 above:
>
> backuppc  9608  9606  0 09:05 ?00:00:00 /usr/local/bin/rsync_bpc
> --bpc-top-dir /mnt/backups/BackupPC --bpc-host-name isast201
> --bpc-share-name / --bpc-bkup-num 118 --bpc-bkup-comp 3 --bpc-bkup-prevnum
> 117 --bpc-bkup-prevcomp 3 --bpc-bkup-inode0 203221 --bpc-attrib-new
> --bpc-log-level 1 -e /usr/bin/ssh -l root --rsync-path=/usr/bin/rsync
> --super --recursive --protect-args --numeric-ids --perms --owner --group -D
> --times --links --hard-links --delete --partial --log-format=log: %o %i %B
> %8U,%8G %9l %f%L --stats --iconv=utf8,UTF-8 --timeout=72000 --exclude=stuff
> isast201:/ /
>
>
> In my example, I have setup ssh keys to allow the BackupPC user to access
> the clients.
>
>
>
> On Tue, Sep 19, 2017 at 8:51 AM, Gandalf Corvotempesta
> <gandalf.corvotempe...@gmail.com> wrote:
>>
>> I can't get rsync command from the client system, as "ps aux" doesn't
>> show the command invocation by the server.
>> BackupPC is running the following:
>>
>> /usr/bin/perl /usr/local/backuppc/bin/BackupPC_dump -i myhost
>>
>> spawning two identical processes:
>>
>> /usr/local/bin/rsync_bpc --bpc-top-dir /var/backups/backuppc
>> --bpc-host-name myhost --bpc-share-name everything --bpc-bkup-num 1
>> --bpc-bkup-comp 3 --bpc-bkup-prevnum -1 --bpc-bkup-prevcomp -1
>> --bpc-bkup-inode0 7047290 --bpc-attrib-new --bpc-log-level 0 --super
>> --recursive --protect-args --numeric-ids --perms --owner --group -D
>> --times --links --hard-links --delete --delete-excluded --partial
>> --log-format=log: %o %i %B %8U,%8G %9l %f%L --stats
>> --block-size=131072 --inplace --timeout=72000
>> --password-file=/var/backups/backuppc/pc/myhost/.rsyncdpw24363
>> --exclude=var/backups/* --exclude=admin_backups/*
>> --exclude=reseller_backups/* --exclude=user_backups/* --exclude=tmp/*
>> --exclude=proc/* --exclude=sys/* --exclude=media/* --exclude=mnt/*
>> --exclude=tmp/* --exclude=wp-content/cache/object/*
>> --exclude=wp-content/cache/page_enhanced/*
>> --exclude=wp-content/cache/db/*
>> --exclude=usr/local/directadmin/data/tickets/* --exclude=var/cache/*
>> --exclude=var/log/directadmin/* --exclude=var/log/lastlog
>> --exclude=var/log/rsync* --exclude=var/log/bacula/*
>> --exclude=var/log/ntpstats --exclude=var/lib/mlocate
>> --exclude=var/lib/mysql/* --exclude=var/lib/apt/lists/*
>> --exclude=var/cache/apt/archives/* --exclude=usr/local/php55/sockets/*
>> --exclude=var/run/* --exclude=var/spool/exim/*
>> backuppc@myhost::everything /
>>
>>
>>
>> standard rsync works.
>> rsnapshot works too (I'm using rsnapshot to back up this host, as BPC
>> freezes)
>>

Re: [BackupPC-users] BackuPC 4 hang during transfer

2017-09-19 Thread Gandalf Corvotempesta
I can't get the rsync command from the client system, as "ps aux" doesn't
show the command invocation by the server.
BackupPC is running the following:

/usr/bin/perl /usr/local/backuppc/bin/BackupPC_dump -i myhost

spawning two identical processes:

/usr/local/bin/rsync_bpc --bpc-top-dir /var/backups/backuppc
--bpc-host-name myhost --bpc-share-name everything --bpc-bkup-num 1
--bpc-bkup-comp 3 --bpc-bkup-prevnum -1 --bpc-bkup-prevcomp -1
--bpc-bkup-inode0 7047290 --bpc-attrib-new --bpc-log-level 0 --super
--recursive --protect-args --numeric-ids --perms --owner --group -D
--times --links --hard-links --delete --delete-excluded --partial
--log-format=log: %o %i %B %8U,%8G %9l %f%L --stats
--block-size=131072 --inplace --timeout=72000
--password-file=/var/backups/backuppc/pc/myhost/.rsyncdpw24363
--exclude=var/backups/* --exclude=admin_backups/*
--exclude=reseller_backups/* --exclude=user_backups/* --exclude=tmp/*
--exclude=proc/* --exclude=sys/* --exclude=media/* --exclude=mnt/*
--exclude=tmp/* --exclude=wp-content/cache/object/*
--exclude=wp-content/cache/page_enhanced/*
--exclude=wp-content/cache/db/*
--exclude=usr/local/directadmin/data/tickets/* --exclude=var/cache/*
--exclude=var/log/directadmin/* --exclude=var/log/lastlog
--exclude=var/log/rsync* --exclude=var/log/bacula/*
--exclude=var/log/ntpstats --exclude=var/lib/mlocate
--exclude=var/lib/mysql/* --exclude=var/lib/apt/lists/*
--exclude=var/cache/apt/archives/* --exclude=usr/local/php55/sockets/*
--exclude=var/run/* --exclude=var/spool/exim/*
backuppc@myhost::everything /



Standard rsync works.
rsnapshot works too (I'm using rsnapshot to back up this host, as BPC freezes).

2017-09-19 16:42 GMT+02:00 Ray Frush <fr...@rams.colostate.edu>:
> Gandalf-
>
> As a troubleshooting step, collect the actual running rsync commands from
> the client system, and from the BackupPC server (found in the Xferlog).
> Post them here to get a wider audience.
>
> Try running an rsync manually  using the same parameters, and see if it
> works.  My guess is not, and there is a misconfiguration that will leap out
> at you as you work through this.
>
> I had to do the same thing when I was doing an initial install.
>
> --
> Ray Frush
> Colorado State University.
>
> On Tue, Sep 19, 2017 at 2:52 AM, Gandalf Corvotempesta
> <gandalf.corvotempe...@gmail.com> wrote:
>>
>> Still getting the same issue.
>> Backups for a couple of hosts are impossible: bpc hangs (at different
>> progress points) and doesn't continue.
>>
>> No load on the backup server or on the source server. Simply, bpc doesn't
>> transfer.
>>
>> 2017-09-18 15:38 GMT+02:00 Gandalf Corvotempesta
>> <gandalf.corvotempe...@gmail.com>:
>> > 2017-09-18 14:30 GMT+02:00 G.W. Haywood via BackupPC-users
>> > <backuppc-users@lists.sourceforge.net>:
>> >> When I first used version 4 I ran into a very similar issue, there
>> >> were one or two bug-fixes which addressed it.  You have not stated
>> >> exactly what version you are using, but first make sure that all the
>> >> BPC software is up to date.
>> >
>> > I'm using the latest version: 4.1.3
>>
>>
>
>
>
>
> --
> Time flies like an arrow, but fruit flies like a banana.
>

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] RefCntFsck

2017-09-19 Thread Gandalf Corvotempesta
Ok, so this is expected.
Thank you.

2017-09-19 14:39 GMT+02:00 Craig Barratt via BackupPC-users
<backuppc-users@lists.sourceforge.net>:
> If a backup fails or is aborted, BackupPC_refCntUpdate is run on the last
> two backups, independent of the $Conf{RefCntFsck} setting.
>
> Craig
>
>
> On Tuesday, September 19, 2017, Gandalf Corvotempesta
> <gandalf.corvotempe...@gmail.com> wrote:
>>
>> refCnt is also run on backup #1 (which in my case is the incremental
>> that I've aborted).
>>
>> Is this correct? I had a similar issue with the alpha version, where
>> an fsck was always executed, but AFAIK this bug was fixed when BPC 4
>> was released as stable.
>>
>> 2017-09-19 14:11 GMT+02:00 Gandalf Corvotempesta
>> <gandalf.corvotempe...@gmail.com>:
>> > Can someone explain the meaning of $Conf{RefCntFsck} = 1; ?
>> >
>> > If I understood properly, based on docs, $Conf{RefCntFsck} = 1 will
>> > run a refCnt process if the latest backup is full.
>> >
>> > In my case, I have a single full backup and i've aborted an
>> > incremental one, then refCnt is run on the full backup.
>> >
>> > Why? I've aborted an incremental, refCnt was already executed on the
>> > full backup when I did that some days ago. Why BPC is still running
>> > refCnt ?
>>
>>
>
>



Re: [BackupPC-users] RefCntFsck

2017-09-19 Thread Gandalf Corvotempesta
refCnt is also run on backup #1 (which in my case is the incremental
that I've aborted).

Is this correct? I had a similar issue with the alpha version, where
an fsck was always executed, but AFAIK this bug was fixed when BPC 4
was released as stable.

2017-09-19 14:11 GMT+02:00 Gandalf Corvotempesta
<gandalf.corvotempe...@gmail.com>:
> Can someone explain the meaning of $Conf{RefCntFsck} = 1; ?
>
> If I understood properly, based on docs, $Conf{RefCntFsck} = 1 will
> run a refCnt process if the latest backup is full.
>
> In my case, I have a single full backup and i've aborted an
> incremental one, then refCnt is run on the full backup.
>
> Why? I've aborted an incremental, refCnt was already executed on the
> full backup when I did that some days ago. Why BPC is still running
> refCnt ?



[BackupPC-users] RefCntFsck

2017-09-19 Thread Gandalf Corvotempesta
Can someone explain the meaning of $Conf{RefCntFsck} = 1; ?

If I understood the docs properly, $Conf{RefCntFsck} = 1 will
run a refCnt process if the latest backup is a full.

In my case, I have a single full backup and I've aborted an
incremental one, yet refCnt is run on the full backup.

Why? I aborted an incremental; refCnt was already executed on the
full backup when I made it some days ago. Why is BPC still running
refCnt?



Re: [BackupPC-users] BackuPC 4 hang during transfer

2017-09-19 Thread Gandalf Corvotempesta
Still getting the same issue.
Backups for a couple of hosts are impossible: bpc hangs (at different
progress points) and doesn't continue.

No load on the backup server or the source server. Simply, bpc doesn't transfer.

2017-09-18 15:38 GMT+02:00 Gandalf Corvotempesta
<gandalf.corvotempe...@gmail.com>:
> 2017-09-18 14:30 GMT+02:00 G.W. Haywood via BackupPC-users
> <backuppc-users@lists.sourceforge.net>:
>> When I first used version 4 I ran into a very similar issue, there
>> were one or two bug-fixes which addressed it.  You have not stated
>> exactly what version you are using, but first make sure that all the
>> BPC software is up to date.
>
> I'm using the latest version: 4.1.3



Re: [BackupPC-users] BackuPC 4 hang during transfer

2017-09-18 Thread Gandalf Corvotempesta
2017-09-18 14:30 GMT+02:00 G.W. Haywood via BackupPC-users
:
> When I first used version 4 I ran into a very similar issue, there
> were one or two bug-fixes which addressed it.  You have not stated
> exactly what version you are using, but first make sure that all the
> BPC software is up to date.

I'm using the latest version: 4.1.3



[BackupPC-users] BackuPC 4 hang during transfer

2017-09-18 Thread Gandalf Corvotempesta
Hi,
I'm trying to back up 2 hosts with BPC4, but bpc seems to be frozen
during transfer.

I'm stracing both processes (the main bpc process and both transfer
processes), but nothing
has happened for about 3 hours.

Any clue?



[BackupPC-users] BPC4: shared files from pool and compression

2017-09-18 Thread Gandalf Corvotempesta
Hi all.
I'm testing BPC4 by backing up a couple of hosts.
These hosts are mostly identical, so I would expect that most of the files
are shared through the pool.

How can I check this? For example, /bin/ls should be the same on all hosts;
how can I check whether /bin/ls is served from the pool or whether BPC is
transferring it again?

Second question: I'd like to disable compression (I'm using ZFS with
native compression),
but I don't want to resync everything again by disabling
compression. What if I set
it to "1"? It should be very fast and much less CPU-intensive than 3
(the default), right?
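
On the level-1-vs-3 question: as I understand it, $Conf{CompressLevel} maps to a zlib compression level, so the speed/ratio trade-off can be previewed on representative data. A minimal sketch (the sample log line is purely illustrative):

```python
import zlib

# Repetitive, log-like sample data (illustrative only).
data = b"2017-09-18 12:00:01 host sshd[123]: Accepted publickey for root\n" * 2000

level1 = zlib.compress(data, 1)  # fastest, slightly worse ratio
level3 = zlib.compress(data, 3)  # BackupPC's default level

# Both levels round-trip losslessly; only speed and ratio differ.
assert zlib.decompress(level1) == data
assert zlib.decompress(level3) == data
print(len(data), len(level1), len(level3))
```

For highly repetitive log data, the ratio difference between levels 1 and 3 is usually small while the CPU saving is real.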

Is there any way to remove compression automatically from the whole pool
without resyncing everything?
Our first sync takes 1 or 2 days (on certain servers) to
complete; I don't want to start
backups from scratch.

Maybe something like this proof-of-concept?

find /var/backups/backuppc/cpool -type f -exec gunzip {} \;
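
For what it's worth, plain gunzip would most likely fail on pool files: as far as I know, BackupPC's cpool entries are zlib deflate streams rather than gzip files (BackupPC ships BackupPC_zcat for reading them). A toy illustration of why the two formats aren't interchangeable:

```python
import gzip
import io
import zlib

data = b"sample pool file contents\n" * 50

# A bare zlib stream, roughly what a compressed pool entry contains.
stream = zlib.compress(data, 3)
assert zlib.decompress(stream) == data

# gzip expects its own magic bytes and header, so it rejects the stream.
try:
    gzip.GzipFile(fileobj=io.BytesIO(stream)).read()
    rejected = False
except OSError:  # gzip.BadGzipFile is an OSError subclass
    rejected = True
assert rejected
```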



Re: [BackupPC-users] Email in backup success/failure

2017-09-03 Thread Gandalf Corvotempesta
That's fine. So will I be able to configure bpc to send an email every time a
backup fails for a host?
If yes, how?

On 3 Sep 2017 at 9:45 PM, "Alexander Moisseev via BackupPC-users" <
backuppc-users@lists.sourceforge.net> wrote:

> On 9/3/2017 10:20 PM, Gandalf Corvotempesta wrote:
>
>> So, what's the meaning for the email feature configurable in bpc settings?
>>
>> Which kind of emails are sent?
>>
> BackupPC sends notifications if a host has never been backed up or the
> most recent backup is too old.
> That means you will get an email if a backup has not finished within the
> configured number of days, but BackupPC remains silent if a backup
> finished with a lot of errors.
> Some people think it's a good thing that BackupPC doesn't bother them if
> some files were locked or vanished or whatever during the backup.
>
> 


Re: [BackupPC-users] Email in backup success/failure

2017-09-03 Thread Gandalf Corvotempesta
So, what's the purpose of the email feature configurable in bpc settings?

Which kinds of emails are sent?

On 3 Sep 2017 at 8:01 PM, "Alexander Moisseev via BackupPC-users" <
backuppc-users@lists.sourceforge.net> wrote:

> On 9/3/2017 7:52 PM, Gandalf Corvotempesta wrote:
>
>> Is it possible to have, every night, an email with the results of all backups?
>>
>> I would like to get a report every night, after backup completion
>>
>>
> There is no such functionality in BackupPC, but you can run external
> script by cron during blackout period.
>
> https://github.com/moisseev/BackupPC_report
> https://github.com/moisseev/BackupPC_report/blob/master/BackupPC_report
>
> 


[BackupPC-users] Email in backup success/failure

2017-09-03 Thread Gandalf Corvotempesta
Is it possible to have, every night, an email with the results of all backups?

I would like to get a report every night, after the backups complete.


Re: [BackupPC-users] Scheduling advice

2017-09-01 Thread Gandalf Corvotempesta
2017-09-01 18:50 GMT+02:00 Les Mikesell :
> Large, changing files can be a problem, but log files tend to be
> highly compressible.

Rotated log files are already compressed by the client.



Re: [BackupPC-users] Scheduling advice

2017-09-01 Thread Gandalf Corvotempesta
2017-09-01 18:29 GMT+02:00 Les Mikesell :
> Unless you have a huge turnover in data, keeping more backups will not
> take a lot more space on the server.  There is only one copy kept of
> each unique file, no matter how many backups you keep.  And, since it
> is compressed it will take less space than the original copy.
> That's kind of the point of using backuppc.

This is the same with rsnapshot, which I'm currently using, and I'm
running out of space.
Yes, bpc is able to compress data, but my biggest issue is with log
files, which change every day;
every server creates about 20-30GB of new log files per day.



Re: [BackupPC-users] Scheduling advice

2017-09-01 Thread Gandalf Corvotempesta
2017-09-01 17:10 GMT+02:00 Ray Frush :
> BackupPC's retention rules are not necessarily the easiest to understand.
> Your proposed schedule would result in having only 7 days of backups, which
> is probably not what you want.

Yes, only 7 days of backups is what I want.
I don't have enough space on this backup server, so I need to keep
retention low.

Something like your example is OK for me, but with only 7 days of backups.



Re: [BackupPC-users] Scheduling advice

2017-09-01 Thread Gandalf Corvotempesta
2017-08-31 16:33 GMT+02:00 Ray Frush :
> The values you'll want to check:
> $Conf{IncrKeepCnt} = 26;  # This is the number of total 'unfilled'
> backups kept.
>
> $Conf{FillCycle} = 7;# This is how often a filled backup is kept (1 per
> week) which strongly influences the next setting
>
> $Conf{FullKeepCnt} = [  4,  3,  0, ];  # This defines how many 'filled'
> backups are retained.
>
> The combination of filled and unfilled backups result in ~32 days of daily
> backups plus a couple of older ones just in case a user needs a file from
> more than a month ago.

So, to achieve 7 days of daily backups with 1 filled backup every 4
months, would the following be OK?

$Conf{IncrKeepCnt} = 7; # 7 days of incrementals
$Conf{FillCycle} = 120; # 1 filled every 120 days (4 months)
$Conf{FullKeepCnt} = [ 0 ]; # keep only the latest full



Re: [BackupPC-users] Scheduling advice

2017-08-31 Thread Gandalf Corvotempesta
2017-08-31 18:51 GMT+02:00 Gandalf Corvotempesta
<gandalf.corvotempe...@gmail.com>:
> Yes, on the run everything missing is synced. But what about a restore?

*on the NEXT run



Re: [BackupPC-users] Scheduling advice

2017-08-31 Thread Gandalf Corvotempesta
2017-08-31 18:44 GMT+02:00 Les Mikesell :
> With rsync xfers, only the changes are going to be transferred.  The
> difference in a backuppc full and incremental is that the incremental
> will use the rsync feature of comparing the timestamp and length of
> the files to quickly skip unchanged files, where a full run will do a
> full read of all files on the target host for a block checksum
> comparison with the old copy.   If you use checksum-caching, the
> backuppc side will store those on the 2nd full run and not have to
> uncompress and compare for the third and subsequent full runs -
> however the client side always does a full read so fulls take more
> time but not a lot more bandwidth.   Bpc3 required the old matching
> file to have been in the same location on the same host to avoid
> transferring again.  Bpc4 is supposed to be able to identify matching
> files out of the pool if they have been renamed or you already have a
> copy from another host.   So if that "new" 5 GB was copied from
> somewhere that was already backed up, you would not need to transfer
> anything again.
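
Les's distinction between the incremental "quick check" and a full-read comparison can be sketched like this (simplified: real rsync compares rolling block checksums, not whole-file hashes):

```python
import hashlib
import os
import tempfile

def quick_check(a, b):
    """Incremental-style check: size and mtime only, no file reads."""
    sa, sb = os.stat(a), os.stat(b)
    return sa.st_size == sb.st_size and int(sa.st_mtime) == int(sb.st_mtime)

def full_check(a, b):
    """Full-style check: read both files and compare content hashes."""
    digest = lambda p: hashlib.sha1(open(p, "rb").read()).digest()
    return digest(a) == digest(b)

with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "src")
    dst = os.path.join(d, "dst")
    open(src, "wb").write(b"hello")
    open(dst, "wb").write(b"jello")  # same size, different bytes
    st = os.stat(src)
    os.utime(dst, (st.st_atime, st.st_mtime))  # same timestamp too
    qc = quick_check(src, dst)  # True: an incremental would skip this file
    fc = full_check(src, dst)   # False: a full read detects the change
print(qc, fc)  # → True False
```

This is why a full costs read time on the client even when almost nothing is transferred.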

Yes, now it's clear.
But my issue is not bandwidth but time. A longer backup increases
load on the
host for more time.

So, a filled backup is only needed to prevent bit-rot or something
similar, right?
If my filesystem is ZFS, can I safely use a single filled backup for
many months?
There is no need to compare checksums in bpc, because ZFS already does this.

> "Filled" backups don't take a lot more space, just more time to build
> the directory structure.  If you are concerned about this, keep more
> filled copies.  In any case the next run will copy in anything
> missing.

Yes, on the next run everything missing is synced. But what about a restore?
If "file1" was created in the filled backup (now missing) and
untouched in the subsequent incremental backup
(thus it was not transferred), does losing the filled backup mean
losing "file1"?



Re: [BackupPC-users] Scheduling advice

2017-08-31 Thread Gandalf Corvotempesta
2017-08-31 18:34 GMT+02:00 Ray Frush :
> I'll extend the example
>
> Day 0  :  Full backup 100GB transfered
> Day 1  : add 5GB ,   Incremental runs, 5GB transferred
> Day 2 : add 5GB  ,   Incremental, ~5GB transferred
> Day 3 ; add 5 GB,   Full runs.   ALL files check-summed.  Files with
> identical checksum are skipped, new/changed files transferred:  ~ 5GB
>You do incur extra TIME related to checksumming all the files, but
> you only transfer what's changed/new

Perfect.

> BackupPC requires a minimum of one filled backup.   If you gracefully delete
> a filled backup, I believe that BackupPC does intelligently fill the next
> backup.   If you were to delete the filled backup from the filesystem
> directly, It is simply missing, and BackupPC would have to build a restore
> tree referencing all of the available incrementals back to the most recent
> available filled backup.

OK, but let's simulate a crash in your example:

On day 2, before the incremental backup, the filled one (day 0) is lost.
Is the backup made on day 1 still available with "all" files, or only with
the 5GB added between day 0 and day 1?



Re: [BackupPC-users] Scheduling advice

2017-08-31 Thread Gandalf Corvotempesta
2017-08-31 17:54 GMT+02:00 Les Mikesell :
> I guess I'm missing why you would ever want to delete anything
> manually.  With bpc the actual files are going to be in the pool
> anyway and you almost certainly don't want to delete anything manually
> from there because you'd lose things that are pooled from other hosts.
>   In any case, though, if the next (rsync) run does not find an
> existing copy it should fill it back in.  Tar/smb backups would take a
> full run to recover since they only transfer new files by timestamp on
> incrementals.

Obviously I don't want to delete anything; it's just a safety concern.
I'll repeat my experience with Bacula: with Bacula you have Fulls,
Incrementals and Differentials.
Incrementals are made from the latest Full and they only store changed files.

When you have to restore a host, you need the latest Full and all
following Incrementals.
Now let's assume a schedule with 1 full per month and 30 days of incrementals.
If you are at the 29th of the month and, for whatever reason, you lose
the Full (unclean shutdown and so on),
you have lost the whole month of backups.

I learnt this the hard way when an unclean shutdown in the server
room corrupted the Full file on the last day of the month.
The same unclean shutdown crashed a server that I had to restore. The
broken server was the one affected by the lost backup in Bacula.
Thus, a single failure affected a month's worth of backups.

With rsnapshot this can't happen. If you lose a single file from any
backup point, that file is still available in any other backup point
(obviously, as long as it is present there).
All backups are totally unrelated to each other thanks to hardlinking.
You can make any kind of mess with any backup point and nothing will
break (except the single backup point that you are messing with).
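
The hardlink independence described above is easy to demonstrate. A toy sketch (rsnapshot's real daily.N directories work the same way at a larger scale):

```python
import os
import shutil
import tempfile

# Two "snapshot" directories hard-link the same inode, so each snapshot
# is self-contained: removing one never destroys data the other references.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "daily.0"))
os.makedirs(os.path.join(root, "daily.1"))

file0 = os.path.join(root, "daily.0", "file1")
with open(file0, "w") as f:
    f.write("important data")
os.link(file0, os.path.join(root, "daily.1", "file1"))  # hard link, not a copy

shutil.rmtree(os.path.join(root, "daily.0"))  # "lose" the older snapshot
with open(os.path.join(root, "daily.1", "file1")) as f:
    survived = f.read()
print(survived)  # → important data
shutil.rmtree(root)
```

The data block only disappears when the last link to it is removed, which is why deleting one rsnapshot backup point cannot corrupt the others.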



Re: [BackupPC-users] Scheduling advice

2017-08-31 Thread Gandalf Corvotempesta
2017-08-31 17:32 GMT+02:00 Ray Frush :
> With BackupPC 4.x  we only take a 'full' every 90 days, and because we're
> using rsync, subsequent fulls aren't as painful as the first one.  We run
> the full to ensure that all checksums match to avoid silent data corruption
> on th storage

So, with a "full" run, is the second "full" still seen as an
incremental by rsync?
Let's assume a 100GB host.
bpc will back up that host for the first time; 100GB are transferred.
The next day, only 5GB are added on that host. I'll force bpc to make
a "full/filled" backup.
How many GB are transferred, 105GB or only 5GB?

> With BackupPC 4.x  if you delete a 'filled' backup (why would you do that
> anyway?)  It just makes BackupPC work harder since it has to rebuild
> references back to an older filled backup, which cost time while doing a
> restore.  So you'll only lose the single day that you delete.

And what if I don't have any other filled backup, but only incrementals
made from the deleted "filled" one?



Re: [BackupPC-users] Scheduling advice

2017-08-31 Thread Gandalf Corvotempesta
2017-08-31 17:18 GMT+02:00 Les Mikesell :
> Also, note that backuppc's compression and pooling across host will
> likely at least double the history you can keep online unless your
> data is mostly unique and already compressed.

This is not an issue for me.
My biggest concern is how bpc handles a broken backup point. I had
multiple issues in the past
with Bacula, where if you lose/corrupt the full backup, every
following backup is gone.

rsnapshot solves this flawlessly, but doesn't offer any compression
feature or any useful web interface.

I would like to use BPC (I used v3 many years ago with success,
tried v4 last year and it was a total mess due to a bug now fixed),
but the ability to delete a backup point (brutally, from the command
line, not from BPC) is mandatory for me.



Re: [BackupPC-users] Scheduling advice

2017-08-31 Thread Gandalf Corvotempesta
2017-08-31 16:33 GMT+02:00 Ray Frush :
> BackupPC is relatively easy to set up for a schedule like the one you
> propose.  We keep a 30 day backup history with a few extra weeks tacked
> on to get out to ~70 days, so the values below reflect our schedule:
>
> The values you'll want to check:
> $Conf{IncrKeepCnt} = 26;  # This is the number of total 'unfilled'
> backups kept.
>
> $Conf{FillCycle} = 7;# This is how often a filled backup is kept (1 per
> week) which strongly influences the next setting
>
> $Conf{FullKeepCnt} = [  4,  3,  0, ];  # This defines how many 'filled'
> backups are retained.
>
> The combination of filled and unfilled backups results in ~32 days of
> daily backups, plus a couple of older ones just in case a user needs a
> file from more than a month ago.

Thanks for the reply.
In this case, you are still making some full backups.
I don't want to run any full backup except for the first one, like
with rsnapshot.
With rsnapshot, only incrementals are made. I don't want to run full
backups because I have some very large servers that would take 2 or 3
days to transfer everything, but only a couple of hours to transfer
the changed files during an incremental run.
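For reference, an rsnapshot-like schedule might be sketched with the same parameters quoted above; the values below are illustrative assumptions, not a tested configuration:

```perl
# Illustrative per-host settings for an "incrementals only" schedule.
# The parameter names are standard BackupPC 4 config options; the
# numeric values are guesses to be tuned per site.
$Conf{FullPeriod}  = 365.97;  # postpone new fulls for roughly a year
$Conf{IncrPeriod}  = 0.97;    # one incremental per day
$Conf{IncrKeepCnt} = 30;      # keep about a month of incrementals
$Conf{FillCycle}   = 7;       # build a filled backup every 7th backup
```

Filling happens on the server from data already in the pool, so it should not by itself cause a multi-day transfer.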

> To answer your second question: BackupPC does a good job of managing the
> 'filled' (think 'full') backups if you decide to delete one.  I have
> found that BackupPC is pretty good at self-healing from issues.  We had
> a number of backups impacted by running out of inodes during a cycle.
> While the files lost for lack of inodes cannot be recovered, BackupPC
> recovered gracefully on the next cycle after the file system was expanded.

So, what happens if I delete the filled backup? Do I lose only that
single backup point, or are some subsequent incrementals also lost
because some of their files were stored in the "filled" backup?

rsnapshot makes use of hardlinks, so the only way to lose a file is to
lose all the hardlinks pointing to it.
On the first run, all files are transferred. On following runs
(incrementals) only changed files are transferred; everything else is
hardlinked to the first backup. If you lose the "first" backup, the
hardlinks still resolve.

Is it the same with BPC?
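The hardlink behaviour described above is easy to verify with a quick shell experiment (the daily.N names are just placeholders mimicking rsnapshot's layout):

```shell
# Simulate two rsnapshot-style snapshots sharing a file via a hard link,
# then delete the older snapshot and confirm the data survives.
set -e
tmp=$(mktemp -d)
mkdir "$tmp/daily.1"
echo "payload" > "$tmp/daily.1/file"         # "first full" transfer
mkdir "$tmp/daily.0"
ln "$tmp/daily.1/file" "$tmp/daily.0/file"   # incremental: hard link, no copy
rm -rf "$tmp/daily.1"                        # delete the "first" backup entirely
result=$(cat "$tmp/daily.0/file")            # data is still reachable
echo "$result"                               # prints: payload
```

The file's data blocks are freed only when the last link to the inode is removed.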



Re: [BackupPC-users] Scheduling advice

2017-08-31 Thread Gandalf Corvotempesta
Additionally, what happens if I delete/lose/break the full backup?
Will any subsequent incremental backups be broken, or will the
following incremental automatically become a "full", like with
rsnapshot?

2017-08-30 21:54 GMT+02:00 Gandalf Corvotempesta
<gandalf.corvotempe...@gmail.com>:
> Hi to all.
> Currently I use rsnapshot with success to back up about 20 hosts.
>
> Our configuration is simple: every night I start 4 concurrent backups,
> keeping at least 10 days of old backups.
>
> In this way, thanks to rsnapshot hardlinks, I'm able to restore any file
> up to 10 days old, and the backup chain stays consistent even if I
> delete multiple backup trees.
>
> How can I get the same with BPC 4?
> Last time I tried, I had difficulty understanding filled/unfilled
> backups and retention times.
>
> Any help? I would like to move to BPC for its deduplication and
> compression features, but keeping the ability to destroy backup trees
> without compromising the whole host backup is mandatory.
>
> (In other words, with Bacula if you lose the full backup, you also lose
> the whole backup chain; with rsnapshot there is no full, so if you lose
> a backup point, you only lose that backup point.)



[BackupPC-users] Scheduling advice

2017-08-30 Thread Gandalf Corvotempesta
Hi to all.
Currently I use rsnapshot with success to back up about 20 hosts.

Our configuration is simple: every night I start 4 concurrent backups,
keeping at least 10 days of old backups.

In this way, thanks to rsnapshot hardlinks, I'm able to restore any file
up to 10 days old, and the backup chain stays consistent even if I
delete multiple backup trees.

How can I get the same with BPC 4?
Last time I tried, I had difficulty understanding filled/unfilled
backups and retention times.

Any help? I would like to move to BPC for its deduplication and
compression features, but keeping the ability to destroy backup trees
without compromising the whole host backup is mandatory.

(In other words, with Bacula if you lose the full backup, you also lose
the whole backup chain; with rsnapshot there is no full, so if you lose
a backup point, you only lose that backup point.)


Re: [BackupPC-users] Incremental backups

2016-11-01 Thread Gandalf Corvotempesta
2016-11-01 13:07 GMT+01:00 Adam Goryachev :
> Ummm, silly question, but why would you want to delete a backup?

I don't want to delete backups, but when things go bad, this could
happen. I had an issue with Bacula where a full backup was
corrupted/unreadable, and I lost all backups for that server because of
that single missing backup.

> BackupPC supports automatic removal of old backups based on the schedule
> you provide, you shouldn't be manually messing with the backups. If you
> need a different schedule, then adjust the config, and let backuppc
> handle it for you.
>
> So, can you explain the need to delete random backups manually?
> Generally, if you need to do something weird like that, then either you
> are doing something wrong, or you are using the wrong tool.

Read above.
It's just for safety.



Re: [BackupPC-users] Incremental backups

2016-11-01 Thread Gandalf Corvotempesta
2016-11-01 11:35 GMT+01:00 Johan Ehnberg :
> Changes in BackupPC 4 are especially geared towards allowing very long
> full periods. With the most recent backup always being filled (as
> opposed to rsnapshot's hardlinks pointing back to the first), a full
> backup is not required to maintain a recent and complete representation
> of all the files and folders.

So, with the current v4, deleting a full backup doesn't break the
following incrementals?
For example, with Bacula, if you delete a "full" backup, all following
backups are lost.
In rsnapshot, you can delete whatever you want; it doesn't break
anything as long as you keep at least 1 backup, obviously.



Re: [BackupPC-users] Incremental backups

2016-11-01 Thread Gandalf Corvotempesta
2016-11-01 9:26 GMT+01:00 Adam Goryachev :
> Easy: configure your full/incremental keep counts and your
> full/incremental periods so that they match your desired retention.
>
> Make sure you use rsync (i.e., the same as rsnapshot).
>
> Use checksum caching.
>
> Then, after your second full backup, you'll see that the time to
> complete a full is similar to an incremental.

Are there any drawbacks to doing this like rsnapshot?
What happens if I delete the "full" backup?
With rsnapshot nothing happens: thanks to hardlinks, the only way to
lose backups (and data) is to delete all the hardlinks pointing to a
file, so I would have to delete the whole backup pool.

What about BackupPC? Is a new full made if the previous full is lost,
or is the first "useful" backup promoted to full?

I'm asking because, based on the answer, I can try to find the best
config for my environment. If BPC is smart enough to work even without
the original full, I can use a very long full period (like 1 year).



[BackupPC-users] Incremental backups

2016-11-01 Thread Gandalf Corvotempesta
One of the biggest advantages of rsnapshot (in my own environment)
is the absence of "backup levels". There isn't any "full" or
"incremental": all backups are incremental, and common files are
hardlinked back to the first backup.

This is very useful for us, as we can delete any snapshot or pool
and still have the full backup available. Let me try to explain
better:

Ten days ago I started a new backup. It was a "full" dump because it
was the first backup for that host.
After that, all following backups are made incrementally and
hardlinked back to the first.
Thanks to hardlinks, I can delete EVERY backup, even the first, and
still have the full backup available. So, having:

daily.0
daily.1
daily.2
daily.3
daily.4

I can do the following with no issue:
"rm -rf daily.1 daily.2 daily.3 daily.4"
to remove everything except the most recent backup and still have the
full backup available in daily.0.

How can I accomplish this with BackupPC 4? I don't want to create
"full" backups, as this would take 2 days for some servers, except
for the very first backup of a host or if the full backup is
missing.



Re: [BackupPC-users] Version 4 vs 3

2016-10-29 Thread Gandalf Corvotempesta
On 28 Oct 2016 at 19:56, "Nick Bright" wrote:
>> I think v4 has had enough real world testing where it can be disabled.
See Craig's comment:
>> https://github.com/backuppc/backuppc/issues/4
>>

So someone should post a pull request to remove that forced fsck;
I have no idea which code needs to be removed.


Re: [BackupPC-users] Version 4 vs 3

2016-10-28 Thread Gandalf Corvotempesta
2016-10-28 17:36 GMT+02:00 Alain Mouette :
> Please: I went and read about rsnapshot, and it also makes extensive
> use of hard links. Does it perform differently from BackupPC in this
> respect?

rsnapshot is much faster than BackupPC because it doesn't have to do
any checks on the pool.
There is no deduplication or compression, so it runs a plain rsync at
maximum speed.


> And what file system were you having trouble with? Was it Ext4 as seems to
> be recommended?

I've tried both ext4 and XFS. My issue wasn't related to the
filesystem: it's BPC 4 that forces a full fsck after each backup.



Re: [BackupPC-users] Version 4 vs 3

2016-10-28 Thread Gandalf Corvotempesta
On 28 Oct 2016 at 01:04, "Adam Goryachev" wrote:
> Doing work recently (adding new hosts) I realised that performance on v4
> is hit hard because of a couple of "bugs" (undeveloped sharp edges)
> which makes it do a full fsck on all existing backups after every new
> backup (or partial), and if you have a large number of backups, and/or a
> lot of files on the machines, then this will cause some significant drop
> in performance.
>

This is why I threw BackupPC away and replaced it with plain
rsnapshot.

There was no way to back up 250GB hosts with 4 or 5 million files,
because of the forced fsck after almost every backup.


Re: [BackupPC-users] BPC 4 very slow

2016-02-10 Thread Gandalf Corvotempesta
2016-02-10 16:34 GMT+01:00 Les Mikesell :
> If you want to keep more than one full shouldn't FullKeepCnt and maybe
> FullKeepMin be higher?

As I have written many, many times before: IT DOESN'T WORK.
I'm trying every possible combination of settings to get at least 1
backup per day, but it's impossible: every time BPC starts a new full
backup (and it is running a full every time), the older one is
removed, so I'm losing days.



Re: [BackupPC-users] BPC 4 very slow

2016-02-10 Thread Gandalf Corvotempesta
2016-02-07 17:25 GMT+01:00 Les Mikesell :
> How many filled backups are you configured to keep?  And have you
> adjusted the FillCycle value accordingly?  According to the docs, if
> there aren't any filled backups other than the most recent, then
> FillKeepPeriod won't have any effect.  It looks like if you adjust the
> full/incremental schedules away from the default that gives v3
> behavior, you have to tweak FillCycle to create the filled copies you
> want to keep.

I've tried every possible combination. It doesn't work. When a filled
backup is removed, the whole backup is removed, and this is *TOTALLY
WRONG* because we lose recovery points.

Currently I'm trying FillCycle = 7, full min = 1, full frequency = 6.97.

Let's see, but the biggest issue is the backup removal.



Re: [BackupPC-users] BPC 4 very slow

2016-02-07 Thread Gandalf Corvotempesta
2016-01-31 10:01 GMT+01:00 Gandalf Corvotempesta
<gandalf.corvotempe...@gmail.com>:
> I have some updates.
> Now it seems to be working properly, with no more deleted backups. I
> have #1, #3, #4 .. #8.
> So, the only missing backups are #0 and #2.
> #1 and #3 are both incremental unfilled, #4 is a filled full, and #5
> through #7 are incremental unfilled.
> #8 (today) is incremental unfilled, as it's the last backup.

It still doesn't work.
Every time BPC removes a filled backup, the *whole* backup is removed,
so I still have many unavailable backup points. For example, I'm now
missing #0, #2, #4, #8, #12 and #15.

As I back up once a day, having 6 unavailable backups means I have 6
days unavailable for restore.

I've tried changing the full backup min and full backup count settings
in almost every possible combination; nothing changes, it simply
doesn't work.

What I would like to achieve is 1 full every 15 days and 1 incremental
on every other day, keeping 2 fulls and 30 incrementals. For example:

1 full on February 1, incrementals from February 2 to February 14,
1 full on February 15, incrementals from February 16 up to
February 29.
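Using the parameter names already discussed in this thread, that schedule might look roughly like this (illustrative, untested values):

```perl
# Hypothetical sketch of "1 full every 15 days, incrementals in between,
# keep 2 fulls and 30 incrementals"; tune the exact numbers per site.
$Conf{FullPeriod}  = 14.97;  # a full roughly every 15 days
$Conf{IncrPeriod}  = 0.97;   # incrementals on the days in between
$Conf{FullKeepCnt} = [2];    # retain the last 2 fulls
$Conf{IncrKeepCnt} = 30;     # retain 30 incrementals
```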



Re: [BackupPC-users] BPC 4 very slow

2016-01-31 Thread Gandalf Corvotempesta
2016-01-27 20:56 GMT+01:00 Gandalf Corvotempesta
<gandalf.corvotempe...@gmail.com>:
> Actually it is removing the WHOLE backup, making some days unavailable
> to restore.

I have some updates.
Now it seems to be working properly, with no more deleted backups. I
have #1, #3, #4 .. #8.
So, the only missing backups are #0 and #2.
#1 and #3 are both incremental unfilled, #4 is a filled full, and #5
through #7 are incremental unfilled.
#8 (today) is incremental unfilled, as it's the last backup.

This seems to be correct, right?

Two things are still not working:

1) fsck is still run for every backup, after every backup, not only
after a failure as I thought before.
2) the graphs on the home page are not generated properly. I have
rrdtool installed and BPC is running it, but no graphs are plotted.



Re: [BackupPC-users] BPC 4 very slow

2016-01-28 Thread Gandalf Corvotempesta
2016-01-28 17:42 GMT+01:00 Les Mikesell :
> on the other hand, there is a
> difference between '6.97' and 7.97.  If that isn't a typo, something
> odd happened.

That's OK: I changed the config through the admin panel, switching
FullPeriod from 7.97 to 6.97; that's why I saw the numbers saved as
strings.



Re: [BackupPC-users] BPC 4 very slow

2016-01-28 Thread Gandalf Corvotempesta
2016-01-27 23:24 GMT+01:00 Gandalf Corvotempesta
<gandalf.corvotempe...@gmail.com>:
> I had this:
>
> $Conf{FullPeriod} = 27.97;
>
> now changed to
>
> $Conf{FullPeriod} = 7.97;
>
> just in case bpc was parsing only the first digit.

I've also noticed that the original configuration uses float values,
but when saving from the web interface, everything is converted to
strings:

Config saved by web interface:
# grep '$Conf{FullPeriod}' config.pl
$Conf{FullPeriod} = '6.97';

Original config, prior to saving from the web:
# grep '$Conf{FullPeriod}' config.pl.old
$Conf{FullPeriod} = 7.97;


In the first case FullPeriod is a string; in the second, a float.
Now I'm trying with the "string" version; let's see.



Re: [BackupPC-users] BPC 4 very slow

2016-01-28 Thread Gandalf Corvotempesta
2016-01-28 16:03 GMT+01:00 Bowie Bailey :
> This is Perl.  There is no real distinction between strings, integers,
> and floats.  7.97, '7.97', and "7.97" (along with a few other more
> obscure variants) will all be interpreted the same.

Thanks for the clarification.



Re: [BackupPC-users] BPC 4 very slow

2016-01-27 Thread Gandalf Corvotempesta
2016-01-27 22:50 GMT+01:00 Les Mikesell :
> And yet, other people who haven't made your changes don't see that
> issue.  I don't know why it would happen but I wonder if your
> FullPeriod is somehow getting parsed as 2.

I had this:

$Conf{FullPeriod} = 27.97;

now changed to

$Conf{FullPeriod} = 7.97;

just in case bpc was parsing only the first digit.


