Re: [BackupPC-users] How to do a initial seed on pool

2022-06-03 Thread G.W. Haywood via BackupPC-users

Hi there,

On Fri, 3 Jun 2022, Bruno Rogerio Fernandes wrote:


... about 15TB of data ...  planning to migrate to backuppc V4 ...
backup is done every day through an Internet connection and would be
pretty dangerous waiting for many weeks to do a first full backup
(wait for transfers to complete through Internet - about 100Mbps
client uplink speed).


Without more information about your data it's hard to be sure, but I'm
not convinced you've yet made the case for employing BackupPC.  In any
case I make it only about two weeks to transfer your 15TBytes of data
at 100MBit/s, even if it's uncompressed.
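
For what it's worth, the two-week figure is easy to check; a quick back-of-the-envelope in bash (the 15 TB and 100 Mbit/s are taken from the message above, the rest is just arithmetic):

```shell
# 15 TB at 100 Mbit/s, ignoring protocol overhead and compression
bits=$(( 15 * 10**12 * 8 ))          # 1.2e14 bits to move
seconds=$(( bits / (100 * 10**6) )) # at 1e8 bit/s -> 1,200,000 s
days=$(( seconds / 86400 ))          # ~13.9 days, i.e. about two weeks
echo "${days}+ days"
```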

Is there nothing useful to be gained by compressing the data?


... I don't have enough space to accommodate two backups ...


Buy more space.  Compared with the value of most data, it's very cheap.


So, I'm wondering, is it ok to seed the backuppc pool manually? ...
If I ... could ... backuppc won't transfer this file over Internet,
so solving my issues.
...
Are there any other things that I'm missing? Is it ok to do that?


Don't do it.  You will almost certainly create issues which few here
will be able to help you with.  It could easily then take longer than
two weeks to sort out the resulting mess.  Why not just start the new
backup now?

If you value your data more than the cost of more storage, you have no
excuse for not buying more storage.  It sounds to me like you already
don't have enough, because most people who are serious about backups
will have a minimum of three copies of the data - often kept at three
different locations on the planet.  Until a few months ago I wasn't so
worried about the planet, but it's probably worth bearing in mind that
it's the only one we have.

--

73,
Ged.


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] achieving 3-2-1 backup strategy with backuppc

2022-06-03 Thread Sharuzzaman Ahmat Raslan
On Wed, Jun 1, 2022 at 11:09 PM  wrote:
>
> Sharuzzaman Ahmat Raslan wrote at about 14:46:52 +0800 on Wednesday, June 1, 
> 2022:
>  > Hello,
>  >
>  > I have been using BackupPC for a long time, and even implement it
>  > successfully for several clients.
>  >
>  > Recently I came across several articles about the 3-2-1 backup
>  > strategy and tried to rethink my previous implementation and how to
>  > achieve it with BackupPC
>  >
>  > For anyone who is not familiar with the 3-2-1 backup strategy, the
>  > idea is you should have 3 copies of backups, 2 copies locally on
>  > different media or servers, and 1 copy remotely on cloud or remote
>  > server
>  >
>  > I have previously implemented BackupPC + NAS, where I create a Bash
>  > script to copy the backup data into NAS. That should fulfil the 2
>  > local backup requirements, and I could extend it further by having
>  > another Bash script copying from the NAS to cloud storage (eg. S3
>  > bucket)
>  >
>  > My concern right now is the experience is not seamless for the user,
>  > and they have no indicator/report about the status of the backup
>  > inside the NAS and also in the S3 bucket.
>  >
>  > Restoring from NAS and S3 is also manual and is not simple for the user.
>  >
>  > Anyone has come across a similar implementation for the 3-2-1 backup
>  > strategy using BackupPC?
>  >
> Sounds interesting...
>  > Is there any plan from the developers to expand BackupPC to cover this 
> strategy?
>
> I don't think this is on the roadmap...
> But it is open source and easily extendable given that the code is
> mostly perl.
> Feel free to add this!
>
I'm not well versed in Perl, but I can try to create a POC in Python,
keeping to the spirit of BackupPC as much as possible.
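
The two Bash scripts described above might look something like this sketch. The pool location, NAS mount point and bucket name are all assumptions, not BackupPC defaults (the data directory varies by distro), and the script defaults to a dry run that only prints the commands:

```shell
#!/usr/bin/env bash
# Copy 2 (pool -> NAS) and copy 3 (NAS -> S3) of the 3-2-1 strategy.
# POOL, NAS and BUCKET below are assumptions -- adjust to your setup.
set -euo pipefail

POOL=/var/lib/backuppc        # BackupPC data directory (distro-dependent)
NAS=/mnt/nas/backuppc         # second local copy
BUCKET=s3://example-offsite   # hypothetical off-site bucket
DRY_RUN=${DRY_RUN:-1}         # set DRY_RUN=0 to actually copy

PLANNED=""
run() {
  PLANNED="${PLANNED}$*"$'\n'
  if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi
}

# -H preserves hard links (essential for a v3 pool; harmless for
# v4's hard-link-free pool); --delete keeps the mirror exact
run rsync -aH --delete "$POOL/" "$NAS/"
# push the NAS copy off-site (requires the AWS CLI and credentials)
run aws s3 sync "$NAS" "$BUCKET" --delete
```

Reporting could then be a matter of mailing the exit status and `rsync --stats` output to the same place BackupPC sends its own notices.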



-- 
Sharuzzaman Ahmat Raslan




Re: [BackupPC-users] achieving 3-2-1 backup strategy with backuppc

2022-06-03 Thread Sharuzzaman Ahmat Raslan
On Wed, Jun 1, 2022 at 11:32 PM Ray Frush  wrote:
>
> I have always interpreted the 3-2-1 strategy to apply to copies of your data, 
> not the number of backups 
> (https://www.backblaze.com/blog/the-3-2-1-backup-strategy/)
>
> As such, I’ve used two strategies over time.
> 1)  Use BackupPC to backup local devices in the same building/LAN, and have a 
> second BackupPC instance in a separate space also running backups of the same 
> devices.  ( 3 copies of the data: the source, one on local backup, one on 
> remote backup.  Requires good network speeds between your local site and your 
> remote site.
>
> 2) Use BackupPC to backup local devices to a NAS.  Use NAS replication to 
> push a copy of the BackupPC data to a remote device.
>

Strategy no. 1 of running two BackupPC systems is interesting. I will run
some tests to figure out whether our ISP upload bandwidth of just 10 Mbps
(on a 30 Mbps fibre subscription) is good enough to run a BackupPC
system in the cloud.
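
What matters for that test is the nightly churn rather than the total data size; the uplink's best-case daily budget works out as:

```shell
# Best-case daily capacity of a 10 Mbit/s uplink running flat out
bytes_per_day=$(( 10 * 10**6 * 86400 / 8 ))   # 108,000,000,000 bytes
gb_per_day=$(( bytes_per_day / 10**9 ))       # 108 GB/day
echo "${gb_per_day} GB/day"
```

So after the initial full completes, nightly incrementals are only feasible if well under ~100 GB of data changes per day.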

-- 
Sharuzzaman Ahmat Raslan




Re: [BackupPC-users] achieving 3-2-1 backup strategy with backuppc

2022-06-03 Thread Sharuzzaman Ahmat Raslan
On Thu, Jun 2, 2022 at 2:29 AM Libor Klepáč  wrote:
>
> Hi,
> we use backuppc in containers (systemd-nspawn), each instance on
> separate btrfs drive.
> Then we do snapshots of said drives using btrbk.
> We pull those snapshots from remote machines, also using btrbk.
>
> If we need to spin up container in remote location (we have longer
> retention in remote location), we just create read-write copy of
> snapshot and spin it up to extract files.
>
> With backuppc4, we also tried to use btrfs compression using zstd,
> instead of backuppc internal compression (you don't need compression,
> because you don't use checksum-seed anymore).
> Seems to work nice too.
>
>
> Libor
>

Interesting implementation.

How do you manage the configuration files? Are they inside the snapshot
as well? When you launch a new container at the remote location, does it
read its configuration from the snapshot?

If you have documented this implementation in a blog or Medium post, I'd
be interested to read more about it.
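
For reference, a btrbk configuration along the lines described might look like the fragment below. The hostnames, subvolume names and retention values are illustrative, so check them against the btrbk documentation before use:

```
# /etc/btrbk/btrbk.conf on the secondary site: pull snapshots of the
# primary's BackupPC subvolume over ssh (all names are examples)
ssh_identity           /etc/btrbk/id_ed25519
snapshot_preserve_min  latest
snapshot_preserve      14d
target_preserve_min    latest
target_preserve        30d 6m        # longer retention off-site

volume ssh://primarybackupserver/mnt/btr_pool
  snapshot_dir btrbk_snapshots
  subvolume customer1-backuppc
    target send-receive /mnt/btr_pool/backups
```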


-- 
Sharuzzaman Ahmat Raslan




Re: [BackupPC-users] achieving 3-2-1 backup strategy with backuppc

2022-06-03 Thread Libor Klepáč
Hi,
we have one subvolume for the system and one for the backuppc data in
each btrfs container (not important to your question).

We also run apache as proxy server on the VM running those containers.
So you access
https://primarybackupserver/backuppc/customer1
https://primarybackupserver/backuppc/customer2
and it's proxied to one of the containers (each container runs its own
copy of apache with the backuppc cgi-bin).

On secondary backup server, we have the same private network and apache as 
proxy, so you can access containers using
https://secondarybackupserver/backuppc/customer1
https://secondarybackupserver/backuppc/customer2
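
That proxy layer is plain mod_proxy; a minimal sketch of the relevant vhost lines, where the container address and paths are assumptions:

```
# httpd vhost on primarybackupserver (needs mod_proxy/mod_proxy_http);
# 10.0.1.11 is the hypothetical private address of customer1's container
ProxyPass        "/backuppc/customer1" "http://10.0.1.11/backuppc"
ProxyPassReverse "/backuppc/customer1" "http://10.0.1.11/backuppc"
```

The same two lines on secondarybackupserver, pointing at its own private network, give the identical URLs there.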

So there is no need to change any settings in the container: just convert
the read-only snapshots (actually two - system and backuppc data) to
read-write subvolumes and spin up the container.
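
That recovery step can be sketched as below. Snapshot names and machine paths are illustrative, and the script only prints the commands unless invoked with --for-real:

```shell
#!/usr/bin/env bash
# Spin up a customer's container from btrbk snapshots on the
# secondary site. All paths and snapshot names are examples.
set -euo pipefail

SNAP=/mnt/btr_pool/backups           # where btrbk put the ro snapshots
MACHINE=/var/lib/machines/customer1  # where the rw copies will live

CMDS=(
  # a plain "btrfs subvolume snapshot" (no -r) makes a writable copy
  "btrfs subvolume snapshot $SNAP/customer1-system.20220603 $MACHINE"
  "btrfs subvolume snapshot $SNAP/customer1-backuppc.20220603 ${MACHINE}-data"
  # boot the container with the data subvolume bind-mounted in place
  "systemd-nspawn --boot -D $MACHINE --bind=${MACHINE}-data:/var/lib/backuppc"
)

for c in "${CMDS[@]}"; do
  if [ "${1:-}" = "--for-real" ]; then eval "$c"; else echo "$c"; fi
done
```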

Libor

On Pá, 2022-06-03 at 23:29 +0800, Sharuzzaman Ahmat Raslan wrote:
On Thu, Jun 2, 2022 at 2:29 AM Libor Klepáč wrote:

Hi,
we use backuppc in containers (systemd-nspawn), each instance on
separate btrfs drive.
Then we do snapshots of said drives using btrbk.
We pull those snapshots from remote machines, also using btrbk.

If we need to spin up container in remote location (we have longer
retention in remote location), we just create read-write copy of
snapshot and spin it up to extract files.

With backuppc4, we also tried to use btrfs compression using zstd,
instead of backuppc internal compression (you don't need compression,
because you don't use checksum-seed anymore).
Seems to work nice too.


Libor


Interesting implementation.

How do you manage the configuration files? Is it inside the snapshot
as well? You launch a new container on the remote location and it
reads the configuration from the snapshot?

If you have documented this implementation in some blog or Medium, I'm
interested to read more about it.

