Re: [BackupPC-users] extra Pool Size charts in Status screen

2022-11-09 Thread Libor Klepáč
Hi,
good find; I was wondering why there are two sets of graphs too.

Can you send a bug report to Debian?

Thanks,
Libor

From: Paul Fox 
Sent: Tuesday, November 8, 2022 3:12 PM
To: General list for user discussion, questions and support 

Subject: Re: [BackupPC-users] extra Pool Size charts in Status screen

I wrote:
 > After upgrading from V3 to V4 (via a system upgrade from Ubuntu 20 to
 > 22) my server status screen now has two copies of the 4- and 52-week
 > pool size charts.  (I.e., 4 charts total.)
...
 > The first images (which are log/poolUsage{4,52}.png) are generated
 > from log/poolUsage.rrd.  (I think so -- at least, all three have
 > identical modtimes).
 >
 > The second set of images are generated (in GeneralInfo.pm) from
 > log/pool.rrd, which in my case is several days old, from before my
 > upgrade to V4.  My suspicion is that this is a stale file, but I also
 > see that there's also code in GeneralInfo.pm to create log/pool.rrd,
 > prior to using it to create the images.
 >
 > So, what's going on?

With further investigation:

It seems that the second pair of graphs is generated by code in
GeneralInfo.pm that is added by the Debian package patches, in
particular 01-debian.patch and 06-fix-rrd-graph-permissions.patch.

Unless I'm mistaken, it seems that backuppc V3 didn't provide pool
graphs at all.  The graphs I've been seeing for the last couple of
decades have been created by code added by the Debian packager.

That's great (I like the charts), but now that backuppc V4 is creating
its own pool graphs, perhaps the Debian patches which do so should go
away.

BTW, the code in GeneralInfo.pm (courtesy of Debian patches 1 and 6)
generates the graphs on the fly using the data in log/pool.rrd.  I
haven't figured out how pool.rrd ever got updated with pool data in
the first place.  It seems likely that that code is already gone:
having renamed pool.rrd a couple of days ago, I see it hasn't been
recreated.

paul
--
paul fox, p...@foxharp.boston.ma.us (arlington, ma, where it's 45.1 degrees)



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Recent version of BackupPC_deleteFile

2022-08-19 Thread Libor Klepáč
Hi,
thanks!
I want to delete directories so this is great

Libor

On Fri, 2022-08-19 at 21:05 +0200, Craig Barratt via BackupPC-users wrote:
V4 has a script BackupPC_backupDelete which can delete a particular directory 
in a specific backup and share.  However, it doesn't have granularity down to 
the file level.

Craig

On Fri, Aug 19, 2022 at 1:36 PM Libor Klepáč
<libor.kle...@bcom.cz> wrote:
Hi, thanks for the answer.

I will have to be patient then :) (and wait for the useless data to
age out of the pool)

Libor

On Wed, 2022-08-17 at 09:02 -0400, backu...@kosowsky.org wrote:
> No - it would require a nearly complete rewrite as v4 uses reverse
> deltas.
>
>

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Recent version of BackupPC_deleteFile

2022-08-19 Thread Libor Klepáč
Hi, thanks for the answer.

I will have to be patient then :) (and wait for the useless data to
age out of the pool)

Libor

On Wed, 2022-08-17 at 09:02 -0400, backu...@kosowsky.org wrote:
> No - it would require a nearly complete rewrite as v4 uses reverse
> deltas.
> 
> 

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


[BackupPC-users] Recent version of BackupPC_deleteFile

2022-08-17 Thread Libor Klepáč
Hello,
is there a recent version of BackupPC_deleteFile that works with
BackupPC 4?
I have BackupPC 4.4.0 on Debian and the bundled script does not work
(it cannot find the include file Attrib.pm).

Or is there another way to (mass-)delete files from a backup?

Thanks,
Libor


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] BackupPC failed after upgrading client to Debian 11

2022-07-22 Thread Libor Klepáč
Hi,
we used a timeout of 6 ;)

So maybe a firewall/router kills your connection when idle. We send
keepalive packets over ssh:

$Conf{RsyncSshArgs} = [
  '-e',
  '$sshPath -o ServerAliveInterval=60 -q -x -l user'
];

Libor

On Fri, 2022-07-22 at 09:33 +, Taste-Of-IT wrote:
> Hi All,
> 
> I have tried --timeout=3600 and I see it in the error log, but it
> seems not to work. The strange thing is that I get only 14,700 files,
> which is not that many. Here are the errors I got:
> 
> packet_write_wait: Connection to 138.201.128.178 port 22: Broken pipe
> rsync_bpc: connection unexpectedly closed (5033731 bytes received so
> far) [receiver]
> Done: 0 errors, 86 filesExist, 56700 sizeExist, 56430 sizeExistComp,
> 0 filesTotal, 0 sizeTotal, 8 filesNew, 60659120 sizeNew, 2481485
> sizeNewComp, 34193 inode
> rsync_bpc: [generator] write error: Broken pipe (32)
> rsync error: error in rsync protocol data stream (code 12) at
> io.c(226) [receiver=3.1.3beta1]
>     same   recv >f..tpog... rw-r--r-- 5025,    5016  
>     same   recv >f..tpog... rw-r--r-- 5025,    5016   
> DoneGen: 0 errors, 0 filesExist, 0 sizeExist, 0 sizeExistComp, 14702
> filesTotal, 43683731122 sizeTotal, 1 filesNew, 19 sizeNew, 27
> sizeNewComp, 34536 inode
> rsync error: unexplained error (code 255) at io.c(820)
> [generator=3.1.3beta1]
> rsync_bpc exited with fatal status 255 (65280) (rsync error:
> unexplained error (code 255) at io.c(820) [generator=3.1.3beta1])
> Xfer PIDs are now 
> Got fatal error during xfer (rsync error: unexplained error (code
> 255) at io.c(820) [generator=3.1.3beta1])
> Backup aborted (rsync error: unexplained error (code 255) at
> io.c(820) [generator=3.1.3beta1])
> BackupFailCleanup: nFilesTotal = 14702, type = full, BackupCase = 6,
> inPlace = 1, lastBkupNum = 
> BackupFailCleanup: inPlace with some new files... no cleanup and
> marking partial
> Running BackupPC_refCountUpdate -h xy-f onxy
> Xfer PIDs are now 15479
> BackupPC_refCountUpdate: host xy got 0 errors (took 40 secs)
> Xfer PIDs are now 
> Finished BackupPC_refCountUpdate (running time: 40 sec)
> Xfer PIDs are now 
> 
> 
> Ideas are welcome - thx
> Taste

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] BackupPC failed after upgrading client to Debian 11

2022-07-20 Thread Libor Klepáč
Hi,
I don't know if it's still the case, but we used to increase rsync's
--timeout parameter in the past.
Also check that your firewall does not close connections that carry no
traffic: when rsync searches for changes, there can be long periods
without any traffic, which caused timeouts or the firewall closing the
connection.
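
For BackupPC v4, both mitigations can be sketched in config.pl roughly
like this (the timeout value, keepalive interval, and login name are
illustrative, not recommendations):

```perl
# Extra rsync options: a generous I/O timeout so long change-scans on
# big trees don't kill an otherwise healthy session.
$Conf{RsyncArgsExtra} = [
    '--timeout=72000',
];

# SSH keepalives so stateful firewalls keep seeing traffic on an
# otherwise idle connection.
$Conf{RsyncSshArgs} = [
    '-e',
    '$sshPath -o ServerAliveInterval=60 -o ServerAliveCountMax=3 -q -x -l backup',
];
```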

Libor


On Tue, 2022-07-19 at 20:21 +, Taste-Of-IT wrote:
> Hi all,
> 
> I had time to make some tests. I ran some local tests and figured
> out that it's not a problem with rsync 3.2.3. On local Debian 11
> systems, there was no problem with a similar configuration. Having
> noticed that the runtime with Debian 11 seems to be longer than with
> Debian 10, I tested a backup with a smaller folder and fewer files,
> and it worked.
> 
> But that's the problem. Instead of backing up the whole system at
> once, I tried an alias with a big folder of approx. 230 GB and
> 700,000 files, but this doesn't work. How can I improve that? Making
> smaller backups isn't an option because the administration overhead
> rises too much.
> 
> thx
> 
> 
> 
> Am 10.07.2022 12:23:58, schrieb Taste-Of-IT:
> > Hi all,
> > 
> > I have the latest BackupPC running on Debian 10. I upgraded one
> > system to Debian 11. Backups were running well and without problems
> > before. After upgrading the client, it fails with these errors:
> > 
> > Got fatal error during xfer (rsync error: unexplained error (code
> > 255) at io.c(820) [generator=3.1.3beta1])
> > Backup aborted (rsync error: unexplained error (code 255) at
> > io.c(820) [generator=3.1.3beta1])
> > 
> > I searched and found different solutions. One is a difference in
> > speed between BPC and the client, but that's not the case here.
> > Another is the different versions of rsync, which I use. But I
> > didn't find a solution for that.
> > 
> > Has anyone a solution for that?
> > 
> > thx
> > 
> > Taste
> > 
> > 
> > 
> > 
> 
> 

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] achieving 3-2-1 backup strategy with backuppc

2022-06-03 Thread Libor Klepáč
Hi,
we have one subvolume for the system and one for the backuppc data for
each btrfs container. (not important to your question)

We also run apache as a proxy server on the VM running those containers.
So you access
https://primarybackupserver/backuppc/customer1
https://primarybackupserver/backuppc/customer2
and it's proxied to one of the containers (each container runs its own
copy of apache with a cgi-bin for backuppc).

On the secondary backup server, we have the same private network and
apache as a proxy, so you can access the containers using
https://secondarybackupserver/backuppc/customer1
https://secondarybackupserver/backuppc/customer2

So there is no need to change any settings in the container; just
convert one of the read-only snapshots (actually two: system and
backuppc data) to a read-write subvolume and spin up the container.

Libor

On Fri, 2022-06-03 at 23:29 +0800, Sharuzzaman Ahmat Raslan wrote:
On Thu, Jun 2, 2022 at 2:29 AM Libor Klepáč
<libor.kle...@bcom.cz> wrote:

Hi,
we use backuppc in containers (systemd-nspawn), each instance on
separate btrfs drive.
Then we do snapshots of said drives using btrbk.
We pull those snapshots from remote machines, also using btrbk.

If we need to spin up a container in the remote location (we have
longer retention there), we just create a read-write copy of a
snapshot and spin it up to extract files.

With backuppc4, we also tried btrfs zstd compression instead of
backuppc's internal compression (you don't need internal compression,
because you don't use checksum-seed anymore).
It seems to work nicely too.


Libor


Interesting implementation.

How do you manage the configuration files? Is it inside the snapshot
as well? You launch a new container on the remote location and it
reads the configuration from the snapshot?

If you have documented this implementation in some blog or Medium, I'm
interested to read more about it.


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] achieving 3-2-1 backup strategy with backuppc

2022-06-01 Thread Libor Klepáč
Hi,
we use backuppc in containers (systemd-nspawn), each instance on
separate btrfs drive.
Then we do snapshots of said drives using btrbk.
We pull those snapshots from remote machines, also using btrbk.

If we need to spin up a container in the remote location (we have
longer retention there), we just create a read-write copy of a
snapshot and spin it up to extract files.

With backuppc4, we also tried btrfs zstd compression instead of
backuppc's internal compression (you don't need internal compression,
because you don't use checksum-seed anymore).
It seems to work nicely too.
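
A rough btrbk.conf sketch of that snapshot-and-pull setup, run from
the secondary server (paths, hostnames, and retention values are
assumptions; check the btrbk documentation for the exact semantics):

```
snapshot_preserve_min  2d
snapshot_preserve      14d
target_preserve_min    no
target_preserve        90d    # longer retention at the remote site

# Pull from the primary over ssh; one subvolume pair per instance.
volume ssh://primarybackupserver/mnt/backuppc1
  snapshot_dir snapshots
  subvolume system
  subvolume data
  target send-receive /mnt/backups/backuppc1
```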


Libor



On Wed, 2022-06-01 at 14:46 +0800, Sharuzzaman Ahmat Raslan wrote:
> Hello,
> 
> I have been using BackupPC for a long time, and even implement it
> successfully for several clients.
> 
> Recently I came across several articles about the 3-2-1 backup
> strategy and tried to rethink my previous implementation and how to
> achieve it with BackupPC
> 
> For anyone who is not familiar with the 3-2-1 backup strategy, the
> idea is you should have 3 copies of backups, 2 copies locally on
> different media or servers, and 1 copy remotely on cloud or remote
> server
> 
> I have previously implemented BackupPC + NAS, where I create a Bash
> script to copy the backup data into NAS. That should fulfil the 2
> local backup requirements, and I could extend it further by having
> another Bash script copying from the NAS to cloud storage (eg. S3
> bucket)
> 
> My concern right now is the experience is not seamless for the user,
> and they have no indicator/report about the status of the backup
> inside the NAS and also in the S3 bucket.
> 
> Restoring from NAS and S3 is also manual and is not simple for the
> user.
> 
> Anyone has come across a similar implementation for the 3-2-1 backup
> strategy using BackupPC?
> 
> Is there any plan from the developers to expand BackupPC to cover
> this strategy?
> 
> Thank you.
> 

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


[BackupPC-users] Pool v4 and compression and checksum-seed

2022-03-11 Thread Libor Klepáč
Hi, 
I have just updated one of our backuppc container instances to Debian
11, so we upgraded to backuppc4.

I have replaced the pool drive with an empty disk; I don't want to
migrate the old pool to V4 (I keep it on the side, so I don't lose the
old backups).
I would like to use btrfs compression on the new pool.
On the V3 pool, I believe I needed to use compression when I used
checksum-seed.
On the V4 pool, checksum-seed is not implemented, so can I disable
compression without any concerns?
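
For reference, btrfs zstd compression for new writes is just a mount
option; a hypothetical fstab line for the new pool disk (device, mount
point, and compression level are illustrative) could look like:

```
# /etc/fstab -- compress new writes with zstd level 3
/dev/sdb1  /var/lib/backuppc  btrfs  compress=zstd:3,noatime  0  0
```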

Thanks,
Libor


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/