Re: [BackupPC-users] rsync_bpc: write failed on "/path/dump.sql": Quota exceeded (122)

2018-10-14 Thread Oliver Lippert
I did that. What I found was misleading (the inode limit looked wrong).

I found the actual problem: besides the size of the volume (HDD), I had turned 
on another feature of the NAS that limits the folder's size (I wanted to test 
it and then forgot about it). So while the volume had enough space, the system 
did not allow it to be used o.O
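
In case anyone else runs into this: errno 122 is EDQUOT ("Disk quota 
exceeded"), so df can report plenty of free space while a per-user or 
per-folder quota is the real limit. Roughly what I ended up checking 
(paths/share names below are just examples; as far as I can tell, DSM 
implements its shared-folder quotas with btrfs qgroups on btrfs volumes):

  # free space on the volume itself
  df -h /volume1

  # classic per-user / per-group quotas, if quota tools are installed
  repquota -a

  # Synology shared-folder quota on a btrfs volume: -r/-e show the
  # referenced/exclusive limits, -F lists qgroups affecting this path
  btrfs qgroup show -reF /volume1/backuppc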



Regards,
Oliver Lippert



On Oct 14, 2018, at 01:13, Craig Barratt via BackupPC-users wrote:
>Please Google your OS type + "quota". Here's a tutorial for Ubuntu/Debian.
>
>
>Craig
>
>On Sat, Oct 13, 2018 at 3:41 AM Oliver Lippert wrote:
>
>> Hey there,
>>
>> I have been running BackupPC for a while now, and for some time I have
>> been getting errors in the backups of big files (10 GB to 50 GB).
>>
>>
>>
>> […]
>>
>> rsync_bpc: write failed on "/path/dump.sql": Quota exceeded (122)
>>
>> […]
>> rsync_bpc: failed to open "/path/mysql/ibdata1", continuing: Quota
>> exceeded (122)
>>
>> […]
>>
>> rsync error: error in file IO (code 11) at receiver.c(391)
>> [receiver=3.0.9.12]
>>
>>
>>
>> I searched the web to figure out which quota I have to configure, but I
>> did not find anyone else having this problem.
>>
>>
>>
>> I run BackupPC in a Docker container on a Synology DS918+. There is
>> enough disk space available.
>>
>>
>>
>> I would appreciate any information or links.
>>
>>
>>
>> --
>>
>> Regards,
>>   Oliver Lippert – Lipperts WEB
>>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] syncing local and cloud backups

2018-10-14 Thread Mike Hughes
Thanks for the information, Ed. I figured I could leave the '-z' off the rsync 
command.
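
For the archives, the command I have in mind looks roughly like this (host 
and paths are placeholders, not our real setup); -H preserves any hard links 
in the pool, and -z stays off since the cpool contents are already 
compressed:

  rsync -aH --delete --numeric-ids /var/lib/backuppc/ offsite:/backup/backuppc/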

Regarding parallel backups: I see your point about a chain exposing the 
potential to nuke all backups, but aren't you doubling the exposure of your 
production system by giving a second backup process access to it? Just curious 
about your thoughts on that, since you seem to have been down this road.


From: ED Fochler 
Sent: Sunday, October 14, 2018 10:23:13 AM
To: General list for user discussion, questions and support
Subject: Re: [BackupPC-users] syncing local and cloud backups

I can answer the rsync compression question: no. Running gzip'd data through 
gzip is a waste of CPU power. Depending on your link and CPU speed, it may even 
slow down your ability to transfer data.

As for the recovery from an rsync'd backup...
If your /etc/BackupPC and /var/lib/BackupPC directories are already symlinks to 
other locations, you can easily shut down BackupPC, swap links, and start it 
up.  So long as both systems are running the same version, it should come up 
cleanly.
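
A minimal sketch of that swap, assuming a systemd service named backuppc and 
example restore paths (adjust both for your distro and layout):

  systemctl stop backuppc
  # repoint the symlinks at the rsync'd copies
  ln -sfn /mnt/restore/etc-BackupPC /etc/BackupPC
  ln -sfn /mnt/restore/var-lib-BackupPC /var/lib/BackupPC
  systemctl start backuppc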

I gave up backing up the backup server though.  If you want proper redundancy 
you run backups in parallel, not in a chain.  If one backup server has access 
to the other backup server, then it has the potential (if compromised) to 
destroy all of your backups and originals from one location.  Redundant backups 
should live in separate private enclaves.

ED.



> On 2018, Oct 13, at 8:52 PM, Mike Hughes  wrote:
>
> Another related question: Does it make sense to use rsync's compression when 
> transferring cpool? If that data is already compressed, am I gaining much by 
> having rsync try to compress it again?
> Thanks!
> From: Mike Hughes 
> Sent: Friday, October 12, 2018 8:25 AM
> To: General list for user discussion, questions and support
> Cc: Craig Barratt
> Subject: Re: [BackupPC-users] syncing local and cloud backups
>
> Cool, thanks for the idea, Craig. So that will provide a backup of the entire 
> cpool and the associated metadata needed to rebuild hosts in the event of a 
> site loss, but what would that process look like?
>
> Say I have the entire ‘/etc/BackupPC’ folder rsynced to an offsite disk. What 
> would the recovery process look like? From what I'm thinking, I'd have to 
> rsync the entire folder back to the destination site, do a fresh install of 
> BackupPC, and associate it with this new folder. Is that about right? Would 
> there not be a way to extract an important bit of data from the cpool 
> without performing an entire site restore? I'm considering the situation 
> where I have data of different priorities: one cpool might contain several 
> TB of files along with a few important servers of higher priority. The only 
> option looks like a full site restore after rsyncing everything back. Am I 
> thinking about this correctly?
>
> From: Craig Barratt via BackupPC-users 
> Sent: Thursday, October 11, 2018 20:01
> To: General list for user discussion, questions and support 
> 
> Cc: Craig Barratt 
> Subject: Re: [BackupPC-users] syncing local and cloud backups
>
> I'd recommend just using rsync if you want to make a remote copy of the 
> cpool, pc and conf directories, to a place that BackupPC doesn't back up.
>
> Craig
>
> On Thu, Oct 11, 2018 at 10:22 AM Mike Hughes  wrote:
> Hi BackupPC users,
>
> Similar questions have come up a few times but I have not found anything 
> relating to running multiple pools. Here's our setup:
> - On-prem dev servers backed up locally to BackupPC (4.x)
> - Prod servers backed up in the cloud to a separate BackupPC (4.x) instance
>
> I'd like to provide disaster recovery options by syncing the dedup'd pools 
> from on-prem to cloud and vice-versa but this would create an infinite loop. 
> Is it possible to place the off-site data into a separate cpool which I could 
> exclude from the sync? It would also be nice to be able to extract files from 
> the synced pool individually without having to pull down the whole cpool and 
> reproducing the entire BackupPC server.
>
> How do others manage on-prem and off-site backup synchronization?
> Thanks,
> Mike
>
>



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/
