Re: [BackupPC-users] Slow local backup

2018-06-16 Thread ED Fochler
> On 2018, Jun 15, at 9:06 AM, Bowie Bailey  wrote:
> 
> The CPUs were not busy.  That's what I was confused about.  I would have
> expected to see a bottleneck at some point, but nothing seemed to be
> busy.  The CPUs were all at or below 20% and iowait was close to 0 most
> of the time.  I'm not sure how I would determine if the loopback was
> saturated.

20% load across 6 or more cores is consistent with a single thread running 
full speed on one core.  A single task may not stay on one core; the 
scheduler can move it around enough that no individual core ever looks 
heavily loaded.  That's doubly true with hyperthreading, which causes 
additional CPU shuffling.
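
If you want to confirm that, a quick check on most Linux systems (this 
assumes the sysstat package is installed, and that BackupPC_dump is the 
busy process; adjust to whatever top shows):

    # Per-core utilization every 2 seconds: a single busy thread shows up
    # as one core near 100% even while the overall average stays near 20%.
    mpstat -P ALL 2

    # Per-thread CPU usage of the running dump process.
    pidstat -t -p $(pgrep -f BackupPC_dump | head -1) 2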

I think you ran into single-threaded compression performance, plus the usual 
process, network, and IO latency, all of which compound when everything is 
on one machine.  That sounds about right to me.  First backups are slow.  
SSH probably wouldn't have made any difference at that speed.
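
If you want a ballpark for what one core can do, something like this 
(purely illustrative; BackupPC compresses with zlib internally, so gzip 
is only a stand-in):

    # Make ~256MB of test data, then time single-threaded compression.
    # Divide 256MB by the elapsed time to estimate per-core throughput.
    dd if=/dev/urandom of=/tmp/ctest bs=1M count=256
    time gzip -c /tmp/ctest > /dev/null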

ED.





Re: [BackupPC-users] syncing local and cloud backups

2018-10-14 Thread ED Fochler
I can answer the rsync compression question: no.  Running gzip'd data through 
gzip again is a waste of CPU power.  Depending on your link and CPU speed, it 
may even slow down your transfer.
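
Concretely, something like this (a hedged sketch: paths assume a stock 
BackupPC v4 layout under /var/lib/BackupPC, "offsite" is a placeholder 
host, and note the deliberate absence of -z):

    # Copy the already-compressed pool and config as-is.  -a preserves
    # ownership and permissions, -H preserves hard links (essential on
    # v3, harmless on v4), --delete keeps the replica in sync.
    rsync -aH --delete /var/lib/BackupPC/ offsite:/var/lib/BackupPC/
    rsync -a /etc/BackupPC/ offsite:/etc/BackupPC/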

As for the recovery from an rsync'd backup...
If your /etc/BackupPC and /var/lib/BackupPC directories are already symlinks 
to other locations, you can easily shut down BackupPC, swap the links, and 
start it back up.  As long as both systems are running the same version, it 
should come up cleanly.
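
A minimal sketch of that swap (service name and replica paths are 
assumptions; adjust for your distro):

    # Stop the service, repoint both symlinks at the replica, restart.
    systemctl stop backuppc
    ln -sfn /mnt/replica/etc-BackupPC /etc/BackupPC
    ln -sfn /mnt/replica/var-lib-BackupPC /var/lib/BackupPC
    systemctl start backuppc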

I gave up on backing up the backup server, though.  If you want proper 
redundancy, run backups in parallel, not in a chain.  If one backup server 
has access to the other, then it has the potential (if compromised) to 
destroy all of your backups and originals from one location.  Redundant 
backups should live in separate private enclaves.

ED.



> On 2018, Oct 13, at 8:52 PM, Mike Hughes  wrote:
> 
> Another related question: Does it make sense to use rsync's compression when 
> transferring cpool? If that data is already compressed, am I gaining much by 
> having rsync try to compress it again?
> Thanks!
> From: Mike Hughes 
> Sent: Friday, October 12, 2018 8:25 AM
> To: General list for user discussion, questions and support
> Cc: Craig Barratt
> Subject: Re: [BackupPC-users] syncing local and cloud backups
>  
> Cool, thanks for the idea Craig. So that will provide a backup of the entire 
> cpool and associated metadata necessary to rebuild hosts in the event of a 
> site loss, but what would that process look like?
>  
> Say I have the entire ‘/etc/BackupPC’ folder rsynced to an offsite disk. What 
> would the recovery process look like? As I understand it, I'd have to 
> rsync the entire folder back to the destination site, do a fresh install of 
> BackupPC, and associate it with this new folder. Is that about right? Would 
> there not be a method to extract an important bit of data from the cpool 
> without performing an entire site restore? I’m considering the situation 
> where I have data of separate priority. That one cpool might contain several 
> TB of files along with a few important servers of higher priority. The only 
> option looks like a full site restore after rsyncing everything back. Am I 
> thinking of this correctly?
>  
> From: Craig Barratt via BackupPC-users  
> Sent: Thursday, October 11, 2018 20:01
> To: General list for user discussion, questions and support 
> 
> Cc: Craig Barratt 
> Subject: Re: [BackupPC-users] syncing local and cloud backups
>  
> I'd recommend just using rsync if you want to make a remote copy of the 
> cpool, pc and conf directories, to a place that BackupPC doesn't back up.
>  
> Craig
>  
> On Thu, Oct 11, 2018 at 10:22 AM Mike Hughes  wrote:
> Hi BackupPC users,
> 
> Similar questions have come up a few times but I have not found anything 
> relating to running multiple pools. Here's our setup:
> - On-prem dev servers backed up locally to BackupPC (4.x)
> - Prod servers backed up in the cloud to a separate BackupPC (4.x) instance
> 
> I'd like to provide disaster recovery options by syncing the dedup'd pools 
> from on-prem to cloud and vice-versa but this would create an infinite loop. 
> Is it possible to place the off-site data into a separate cpool which I could 
> exclude from the sync? It would also be nice to be able to extract files from 
> the synced pool individually without having to pull down the whole cpool and 
> reproduce the entire BackupPC server.
> 
> How do others manage on-prem and off-site backup synchronization? 
> Thanks,
> Mike
> 
> 





Re: [BackupPC-users] syncing local and cloud backups

2018-10-21 Thread ED Fochler
> aren't you increasing the exposure of your production system X2 by giving 
> another backup process access to it?

Yes, and it's the right thing to do, because a production failure with rapid 
recovery is manageably bad, while having your production and backups both 
encrypted by ransomware is a business-ending catastrophe.  I have a longer 
explanation below, but if that much already makes sense to you, you don't 
need to read on.

ED.


Redundant systems generally increase the likelihood of nuisance failure but 
decrease the likelihood of catastrophic failure.  This case is no different.  
By having two separate backup servers in different locations, perhaps with 
different admins, you expose the primary machines to double the risk, since 
there are now two independent methods of access.  Assuming your risk was near 
zero to begin with, doubling it shouldn't be so bad.  So yes, there is a 
greater risk of disruption from multiple methods of access: x2.  Also x2 
network bandwidth.

Assuming the risk of having your backup server compromised is near (but not 
quite) zero, you are looking at a non-zero chance of everything you care 
about getting mangled by a malicious entity who happened to crack a single 
machine.  That's a non-zero chance of total, business-ending failure.  Having 
a separate backup enclave means that killing production and backups 
simultaneously would require two near-zero-probability hacks occurring in 
rapid succession: 0.0001^2
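
To put rough numbers on it (p is each server's compromise probability, 
and the squared term assumes the two compromises are independent, which 
is the whole point of separate enclaves):

    chance of some disruption (either server):   ~ 2p
    chance of losing everything (both at once):  ~ p^2

    e.g. p = 0.0001  ->  2p = 0.0002,  p^2 = 0.00000001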

So the risk of a simple failure with reasonable recovery is twice as likely, 
but the probability of production and backups getting destroyed at once 
drops to the square of an already tiny number.  Other similarly over-cautious 
industry practices include tape backups going into cold storage, mirrored 
RAID sets whose drives get pulled and stored in safety-deposit boxes, and so 
on.  It may be overkill, and that's your call, but I will continue to suggest 
it.  Hacking and ransomware are growing problems.  A single backup solution 
guards well against accidents and hardware failure.  To guard against 
mischief and corruption, you want two, isolated from each other, perhaps 
from different vendors or using different technologies.

Thank you for reading.  I am recovering from back surgery and find 
myself with more free time than usual.  :-)

Ed, the long-winded, self-important explainer and promoter of security 
practices.


> On 2018, Oct 14, at 12:02 PM, Mike Hughes  wrote:
> 
> Thanks for the information, Ed. I figured I could leave the '-z' off the rsync 
> command.
> Regarding parallel backups: I see your point about chains exposing the potential 
> to nuke all backups, but aren't you increasing the exposure of your production 
> system X2 by giving another backup process access to it? Just curious about your 
> thoughts, since you seem to have been down this road.

Re: [BackupPC-users] Q: Issues w/network account for backuppc?

2019-12-05 Thread ED Fochler
I don't understand the benefit of running a local service under a network 
account.  The machine should certainly allow admin logins and grant sudo 
privileges according to network auth, but local services are generally run 
with local accounts to maximize resiliency.  This is especially important for 
a backup server, whose primary purpose is to let you recover information 
when something about your machines is NOT running properly.

Creating an unnecessary dependency for a backup server to back up or restore 
would run counter to my DR/CoOp design methodology.  That said, my server is 
integrated into FreeIPA, and users log in with their network credentials to 
view their own backups.  Is that the desired outcome?

ED.


> On 2019, Dec 5, at 12:59 AM, Kenneth Porter  wrote:
> 
> On 12/4/2019 7:07 AM, G.W. Haywood via BackupPC-users wrote:
>> 
>> Why does this scenario give me a churning feeling in my stomach? 
> 
> Why should it? Centralized management of credentials is a reasonable thing to 
> do.
> 
> My suspicion is that the BackupPC service is trying to start before the 
> credentials server is up. This is where you need systemd: to make sure the 
> startup order is obeyed, and to handle dynamic conditions such as a 
> slow-starting credentials server. The default systemd unit file for BackupPC 
> doesn't check for this; you need to extend it with a dependency on 
> connectivity to the credentials server.
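> 
> A hedged sketch of such an override (unit names are assumptions: a 
> FreeIPA/SSSD client would typically order after sssd.service, and your 
> BackupPC unit may be named differently):
> 
>     # "systemctl edit backuppc.service" creates a drop-in; add:
>     [Unit]
>     Wants=network-online.target
>     After=network-online.target sssd.service
> 
> A drop-in like this is merged with the packaged unit file, so it 
> survives upgrades.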
> 
> 
> 
> 





Re: [BackupPC-users] rsync vs rsyncd speed for huge number of small files

2020-04-21 Thread ED Fochler
I would expect no difference in small-file performance between rsyncd and 
rsync over ssh.  The ssh overhead on a modern system limits the data rate to 
something like 75MB/s, which nearly saturates a gigabit link.  It sounds 
like you have basic filesystem performance issues instead.  More RAM, larger 
caches, SSDs?  Investigate with iostat on the client and the server.
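
For example (a minimal sketch; run these on both ends while a backup is 
in flight):

    # Extended per-device stats every 2 seconds; sustained high %util or
    # await during the backup points at the disks, not the transport.
    iostat -x 2

    # Memory pressure and IO wait at a glance.
    vmstat 2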

ED.


> On 2020, Apr 21, at 4:31 AM, R.C.  wrote:
> 
> Hi
> 
> What is the expected difference in performance between rsync+ssh and rsyncd?
> I would use it over a private LAN, so no concerns about security.
> Currently rsync+ssh is way too slow for a huge number of very small files 
> (about 700K email files in an IMAP server tree), even without --checksum.
> 
> Thank you
> 
> Raf
> 
> 


