Re: [BackupPC-users] R: Re: Storage replica

2015-12-20 Thread Jonathan Molyneux
In my experience, migrating/syncing the cpool/pool/pc folders using rsync 
(with hard links intact) isn't a practical exercise, although it is 
technically possible.

The time required to sync grows dramatically with the size of your 
cpool/pool/pc data set (even if no data actually changes hands).

Most of the latency in syncing comes from the computation rsync has to do 
and from the IOPS required at both sites to gather the metadata from disk 
(very high IOPS, since the accesses are random and each read is small).

If you are going to keep both sites in sync using rsync, look at tuning 
the "vm.vfs_cache_pressure" sysctl to ensure you're caching metadata in 
preference to data blocks (for faster syncs).
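For example (just a sketch; 50 is a starting point to experiment with, not 
a tested recommendation):

    # prefer keeping dentry/inode (metadata) caches over data pages; default is 100
    sysctl -w vm.vfs_cache_pressure=50
    # persist the setting across reboots
    echo 'vm.vfs_cache_pressure = 50' > /etc/sysctl.d/99-backuppc-replica.conf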

Otherwise I would strongly recommend looking at another option, such as 
block-level replication with DRBD, or shipping filesystem snapshots with 
btrfs or ZFS.
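For instance, snapshot shipping with btrfs could look roughly like this 
(paths, hostnames and snapshot names are made up, and it assumes the data 
directory is a btrfs subvolume; the first run is a full send, later runs 
can pass -p <parent-snapshot> for incrementals):

    SNAP=/var/lib/backuppc/.snap-$(date +%F)
    # read-only snapshot of the subvolume holding the BackupPC data
    btrfs subvolume snapshot -r /var/lib/backuppc "$SNAP"
    # stream it to the replica host
    btrfs send "$SNAP" | ssh replica btrfs receive /mnt/backuppc-mirror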

On 21/12/2015 8:59 AM, Adam Goryachev wrote:
> On 17/12/2015 16:04, absolutely_f...@libero.it wrote:
>>> Hi.
>>>
>>> let's say I can't use DRDB or mdadm at all.
>>> If I sync every "pc", one by one, between two storage, this could be working
>>> (even if suboptimal)?
>>> I know that rsync is able to recognize hardlinks (-H) only in same session
>>> (from manpage: " Note that rsync can only detect hard links between files 
>>> that
>>> are inside the transfer set. ").
>>> So I guess, on second storage disk occupation will be bigger.
>>> Thank you very much.
> From memory, I think you could use rsync to copy the pool/cpool + one pc
> directory, then run rsync again to copy the pool/cpool + second pc
> directory, etc... This should preserve all hardlinks since the pc
> directory is hardlinked to the pool, and you copy the pool on every run.
>
> Not sure if that data set will work well or not, depending on number of
> files in your pool I guess, and also number of files per pc, and number
> of backups retained.
>
> Regards,
> Adam
>


--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] R: Re: Storage replica

2015-12-20 Thread Adam Goryachev
On 17/12/2015 16:04, absolutely_f...@libero.it wrote:
>> Hi.
>>
>> let's say I can't use DRDB or mdadm at all.
>> If I sync every "pc", one by one, between two storage, this could be working
>> (even if suboptimal)?
>> I know that rsync is able to recognize hardlinks (-H) only in same session
>> (from manpage: " Note that rsync can only detect hard links between files 
>> that
>> are inside the transfer set. ").
>> So I guess, on second storage disk occupation will be bigger.
>> Thank you very much.

From memory, I think you could use rsync to copy the pool/cpool + one pc
directory, then run rsync again to copy the pool/cpool + second pc
directory, etc... This should preserve all hardlinks since the pc
directory is hardlinked to the pool, and you copy the pool on every run.
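
A rough sketch of what those per-host runs might look like (paths, host 
names and the extra rsync options are my guesses, not something I have 
tested on a real pool):

    cd /var/lib/backuppc    # assuming this is __TOPDIR__
    # pool/cpool plus a single pc directory per run, so -H can match the
    # hardlinked files inside the same transfer set; -R keeps the pc/ prefix
    rsync -aHR --numeric-ids pool cpool pc/host1 replica:/var/lib/backuppc/
    rsync -aHR --numeric-ids pool cpool pc/host2 replica:/var/lib/backuppc/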

Not sure how well that data set will work; it depends on the number of
files in your pool, I guess, and also the number of files per pc and the
number of backups retained.

Regards,
Adam

-- 
Adam Goryachev
Website Managers
www.websitemanagers.com.au




Re: [BackupPC-users] R: Re: Storage replica

2015-12-18 Thread Paolo Basenghi
If you hate yourself, why not!  ;-)
I suspect you would need detailed knowledge of how BPC organizes the pool 
to find every reference to every single file belonging to a single PC...

On the web you can find ready-made scripts that do something similar 
(I don't remember the links), but my guess is that rsync is not your 
tool, not with a 2.5 TiB pool...
Regards
Paolo



On 17/12/2015 16:04, absolutely_f...@libero.it wrote:
> Hi.
>
> let's say I can't use DRDB or mdadm at all.
> If I sync every "pc", one by one, between two storage, this could be working
> (even if suboptimal)?
> I know that rsync is able to recognize hardlinks (-H) only in same session
> (from manpage: " Note that rsync can only detect hard links between files that
> are inside the transfer set. ").
> So I guess, on second storage disk occupation will be bigger.
> Thank you very much.
>
>
>
>




[BackupPC-users] R: Re: Storage replica

2015-12-17 Thread absolutely_f...@libero.it
Hi.

Let's say I can't use DRBD or mdadm at all.
If I sync every "pc" directory, one by one, between the two storage systems, 
could this work (even if suboptimal)?
I know that rsync is able to recognize hard links (-H) only within the same 
session (from the manpage: "Note that rsync can only detect hard links 
between files that are inside the transfer set.").
So I guess disk usage on the second storage will be bigger.
Thank you very much.



>Original message
>From: paolo.basen...@fcr.re.it
>Date: 16/12/2015 9.45
>To: 
>Subject: Re: [BackupPC-users] Storage replica
>
>I've got QNAP too.
>Initially tried with BPC installed directly on the NAS, but the NAS OS 
>is too limited and the risk that BPC does not survive a firmware update 
>is high, IMHO.
>
>But the last two QNAP firmware generations have an interesting new 
>feature: KVM virtualization.
>
>I installed a CentOS 7 VM on each NAS and installed BPC and DRBD on 
>CentOS. You can even follow Christian's smart advice and do iSCSI+RAID1 
>on the VM, while the NAS keeps its standard configuration with no add-ons 
>unsupported by QNAP.
>If your QNAP NAS has a sufficient amount of RAM and CPU power, I highly 
>recommend this configuration.
>
>By the way: with firmware 4.0.x you need a dedicated physical ethernet 
>for each virtual ethernet. With 4.2.x firmwares, virtual switch feature 
>removes that limitation.
>
>Regards
>Paolo
>
>
>
>> On 15/12/2015 20:49, Christian Völker wrote:
>> Well, this information would have been helpful before ;)
>>
>> So in this case instead of trying to add DRBD to the QNAP host I would
>> suggest you export an iSCSI target to the BackupPC host. Add iSCSI
>> client to your BackupPC server and use the iSCSI target as underlying
>> device for local RAID1. Thus, you always have an up-to-date secondary
>> device available. Additionally use snapshot functionality of QNAP and
>> you even have protection against filesystem failures,
>>
>>
>> Greetings
>>
>> Christian
>>
>> On 15.12.2015 at 18:44, absolutely_f...@libero.it wrote:
>>> Hi Stephen,
>>> sorry, I forgot to mention that my secondary storage is a QNAP device.
>>> Actually there is a way to install BackupPC on it:
>>>
>>> http://wiki.qnap.com/wiki/How_to_install_the_BackupPC_application
>>>
>>> Anyway, I would prefer keeping configuration as much standard as possible.
>>> My choice is limited to QNAP daemon (NFS, rsyncd, samba).
>>> Thankyou!
>>>
>>>
>>>
>>>> Original message
>>>> From: step...@physics.unc.edu
>>>> Date: 15/12/2015 14.41
>>>> To: "absolutely_f...@libero.it", <backuppc-us...@lists.sourceforge.net>
>>>> Subject: Re: [BackupPC-users] R: Re: R: Re: Storage replica
>>>>
>>>> (Unless someone mentioned it and I missed it), I'm surprised no one has yet
>>>> offered the standard reply: stand up a 2nd independent BackupPC server.
>>>>
>>>> Because it's totally separate, you're free to configure it identically to
>>>> the first one or, if it's simply for DR, set up a different backup schedule
>>>> (ie, weekly or monthly rather than daily) and retention period -- for
>>>> example keeping only the last 2 backups rather than a long backup
>>>> history... Easy to adjust to fit your available storage and business needs.
>>>>
>>>> Slightly more work up front, but easy to perform restores without depending
>>>> on another server.
>>>>
>>>> Hth.
>>>> ~Stephen
>>>>
>>>> On Mon, 14 Dec 2015, absolutely_f...@libero.it wrote:
>>>>
>>>>> Hi,
>>>>> thanks to both :)
>>>>> DRDB sounds interesting :)
>>>>>
>>>>>
>>>>>> Original message
>>>>>> From: chrisc...@knebb.de
>>>>>> Date: 14/12/2015 15.45
>>>>>> To: 
>>>>>> Subject: Re: [BackupPC-users] R: Re: Storage replica
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> sorry, using rsync for this purpose is absolutely not recommended!
>>>>>>
>>>>>> As always, it depends on what you want to get. If you do not mind 
having
>>>>>> old data as long as you have it, it might be fine with rsync running
>>>>>> once a month. You have a pool of 2.5TB- on my pool of 1.4TB I aborted
>>>>>> rsync after 2days! So you might 

Re: [BackupPC-users] R: Re: Storage replica

2015-12-15 Thread Christian Völker
Hi,

yes, these would be the recommended steps. Just steps 2) and 3) would be
somewhat different.

Keep in mind that you create (yes, with mdadm) a new block device. So you
could write a new (empty!) filesystem on it, mount it to
/var/lib/BackupPC and start using it. But this would mean you lose your
old data. I guess that is no good.

So you need to push your data to the new device, so I would recommend
creating the RAID1 with a missing disk (this is possible with mdadm).
Then do a block-based copy with "dd" and transfer the whole filesystem
to the degraded device. Obviously this should be done with the source
filesystem unmounted and unused, and the target device should have at
least the size of the source. "dd"-ing 2.5TB will be days faster than
rsyncing it.
Once done, mount the RAID1 to /var/lib/BackupPC and start BackupPC
(perhaps perform an fsck and resize2fs first). Then you can delete your
old data and add the disk/partition on the local computer to the RAID1
device. mdadm will start syncing, and a day (or two) later the device
is in sync and you have your data in both places.
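
In command form, the above could look roughly like this (device names, 
portal address and IQN are placeholders, and it assumes an ext filesystem; 
double-check every device name, since dd to the wrong target destroys data):

    # log in to the iSCSI target exported by the QNAP (portal/IQN are examples)
    iscsiadm -m discovery -t sendtargets -p 192.168.1.50
    iscsiadm -m node -T iqn.2004-04.com.qnap:backuppc -p 192.168.1.50 --login
    # say the iSCSI LUN shows up as /dev/sdc and the old local pool partition is /dev/sdb1

    # degraded RAID1 containing only the iSCSI disk; the local disk is added later
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc missing

    # block copy of the old (unmounted!) pool filesystem onto the array
    dd if=/dev/sdb1 of=/dev/md0 bs=64M

    # check / grow the filesystem, then mount it and restart BackupPC
    e2fsck -f /dev/md0 && resize2fs /dev/md0
    mount /dev/md0 /var/lib/BackupPC

    # once the old data is no longer needed, complete the mirror
    mdadm --add /dev/md0 /dev/sdb1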


To configure
On 15.12.2015 at 23:36, absolutely_f...@libero.it wrote:
> Hi,
> thank you for your suggestion.
> At moment I have BackupPC host that use local filesystem as pool (let's say 
> /backuppc folder).
> If I understood, necessary steps are:
> 1) add iSCSI client on BackupPC, to be able to use iSCSI target on QNAP
> 2) create RAID1 (with mdadm??) with local filesystem AND iSCSI
> 3) reconfigure backuppc to use folder on raid1 device
> correct?
>
> Thankyou!
>
>
>
>> Original message
>> From: chrisc...@knebb.de
>> Date: 15/12/2015 20.49
>> To: "absolutely_f...@libero.it", "General list for user discussion, questions and support"
>> Subject: Re: Storage replica
>>
>> Well, this information would have been helpful before ;)
>>
>> So in this case instead of trying to add DRBD to the QNAP host I would
>> suggest you export an iSCSI target to the BackupPC host. Add iSCSI 
>> client to your BackupPC server and use the iSCSI target as underlying
>> device for local RAID1. Thus, you always have an up-to-date secondary
>> device available. Additionally use snapshot functionality of QNAP and
>> you even have protection against filesystem failures,
>>
>>
>> Greetings
>>
>> Christian
>>
>> On 15.12.2015 at 18:44, absolutely_f...@libero.it wrote:
>>> Hi Stephen,
>>> sorry, I forgot to mention that my secondary storage is a QNAP device.
>>> Actually there is a way to install BackupPC on it:
>>>
>>> http://wiki.qnap.com/wiki/How_to_install_the_BackupPC_application
>>>
>>> Anyway, I would prefer keeping configuration as much standard as possible.
>>> My choice is limited to QNAP daemon (NFS, rsyncd, samba).
>>> Thankyou!
>>>
>>>
>>>
>>>> Original message
>>>> From: step...@physics.unc.edu
>>>> Date: 15/12/2015 14.41
>>>> To: "absolutely_f...@libero.it", <backuppc-us...@lists.sourceforge.net>
>>>> Subject: Re: [BackupPC-users] R: Re: R: Re: Storage replica
>>>>
>>>> (Unless someone mentioned it and I missed it), I'm surprised no one has yet
>>>> offered the standard reply: stand up a 2nd independent BackupPC server.
>>>>
>>>> Because it's totally separate, you're free to configure it identically to
>>>> the first one or, if it's simply for DR, set up a different backup schedule
>>>> (ie, weekly or monthly rather than daily) and retention period -- for
>>>> example keeping only the last 2 backups rather than a long backup
>>>> history... Easy to adjust to fit your available storage and business needs.
>>>>
>>>> Slightly more work up front, but easy to perform restores without depending
>>>> on another server.
>>>>
>>>> Hth.
>>>> ~Stephen
>>>>
>>>> On Mon, 14 Dec 2015, absolutely_f...@libero.it wrote:
>>>>
>>>>> Hi,
>>>>> thanks to both :)
>>>>> DRDB sounds interesting :)
>>>>>
>>>>>
>>>>>> Original message
>>>>>> From: chrisc...@knebb.de
>>>>>> Date: 14/12/2015 15.45
>>>>>> To: 
>>>>>> Subject: Re: [BackupPC-users] R: Re: Storage replica
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> sorry, using rsync for this purpose is absolutely not recommended!

[BackupPC-users] R: Re: Storage replica

2015-12-15 Thread absolutely_f...@libero.it
Hi,
thank you for your suggestion.
At the moment I have a BackupPC host that uses a local filesystem as the pool 
(let's say the /backuppc folder).
If I understood correctly, the necessary steps are:
1) add an iSCSI client (initiator) on the BackupPC host, to be able to use the 
iSCSI target on the QNAP
2) create a RAID1 (with mdadm?) from the local disk AND the iSCSI device
3) reconfigure BackupPC to use a folder on the RAID1 device
correct?

Thank you!



>Original message
>From: chrisc...@knebb.de
>Date: 15/12/2015 20.49
>To: "absolutely_f...@libero.it", "General list for user discussion, questions and support"
>Subject: Re: Storage replica
>
>Well, this information would have been helpful before ;)
>
>So in this case instead of trying to add DRBD to the QNAP host I would
>suggest you export an iSCSI target to the BackupPC host. Add iSCSI 
>client to your BackupPC server and use the iSCSI target as underlying
>device for local RAID1. Thus, you always have an up-to-date secondary
>device available. Additionally use snapshot functionality of QNAP and
>you even have protection against filesystem failures,
>
>
>Greetings
>
>Christian
>
>On 15.12.2015 at 18:44, absolutely_f...@libero.it wrote:
>> Hi Stephen,
>> sorry, I forgot to mention that my secondary storage is a QNAP device.
>> Actually there is a way to install BackupPC on it:
>>
>> http://wiki.qnap.com/wiki/How_to_install_the_BackupPC_application
>>
>> Anyway, I would prefer keeping configuration as much standard as possible.
>> My choice is limited to QNAP daemon (NFS, rsyncd, samba).
>> Thankyou!
>>
>>
>>
>>> Original message
>>> From: step...@physics.unc.edu
>>> Date: 15/12/2015 14.41
>>> To: "absolutely_f...@libero.it", <backuppc-us...@lists.sourceforge.net>
>>> Subject: Re: [BackupPC-users] R: Re: R: Re: Storage replica
>>>
>>> (Unless someone mentioned it and I missed it), I'm surprised no one has yet
>>> offered the standard reply: stand up a 2nd independent BackupPC server.
>>>
>>> Because it's totally separate, you're free to configure it identically to
>>> the first one or, if it's simply for DR, set up a different backup schedule
>>> (ie, weekly or monthly rather than daily) and retention period -- for
>>> example keeping only the last 2 backups rather than a long backup
>>> history... Easy to adjust to fit your available storage and business needs.
>>>
>>> Slightly more work up front, but easy to perform restores without depending
>>> on another server.
>>>
>>> Hth.
>>> ~Stephen
>>>
>>> On Mon, 14 Dec 2015, absolutely_f...@libero.it wrote:
>>>
>>>> Hi,
>>>> thanks to both :)
>>>> DRDB sounds interesting :)
>>>>
>>>>
>>>>> Original message
>>>>> From: chrisc...@knebb.de
>>>>> Date: 14/12/2015 15.45
>>>>> To: 
>>>>> Subject: Re: [BackupPC-users] R: Re: Storage replica
>>>>>
>>>>> Hi,
>>>>>
>>>>> sorry, using rsync for this purpose is absolutely not recommended!
>>>>>
>>>>> As always, it depends on what you want to get. If you do not mind having
>>>>> old data as long as you have it, it might be fine with rsync running
>>>>> once a month. You have a pool of 2.5TB- on my pool of 1.4TB I aborted
>>>>> rsync after 2days! So you might need 3days or more for a ful rsync run.
>>>>> I doubt you want it this way!
>>>>>
>>>>> There is no easy ways to have them always in sync. All file level
>>>>> methods are supposed to take ages because of the hardlinks. So you might
>>>>> want to use block based duplication.
>>>>> One possibility is DRBD (which I do here). It is RAID1 through network.
>>>>> If you do not want the remote node slow down local file access you might
>>>>> think of a periodic disconnect and reconnect. Besides of this it appears
>>>>> to be rock stable and reliable.
>>>>> Another possibility are of course distributed file systems. But as you
>>>>> do not need write access on remote as long as primary is alive it might
>>>>> be overkill.
>>>>> Last suggestion is ZFS which I do not know at all. But it appears to
>>>>> have some functionality. Try it.
>>>>>

Re: [BackupPC-users] R: Re: Storage replica

2015-12-14 Thread Christian Völker
Hi,

sorry, using rsync for this purpose is absolutely not recommended!

As always, it depends on what you want to achieve. If you do not mind the
copy holding old data, as long as you have one, it might be fine with rsync
running once a month. You have a pool of 2.5TB; on my pool of 1.4TB I aborted
rsync after 2 days! So you might need 3 days or more for a full rsync run.
I doubt you want it this way!

There are no easy ways to have them always in sync. All file-level
methods are bound to take ages because of the hardlinks. So you might
want to use block-based replication.
One possibility is DRBD (which I use here). It is RAID1 over the network.
If you do not want the remote node to slow down local file access you might
think of a periodic disconnect and reconnect. Apart from this it appears
to be rock stable and reliable.
Another possibility is, of course, a distributed file system. But as you
do not need write access on the remote side as long as the primary is alive,
it might be overkill.
The last suggestion is ZFS, which I do not know at all. But it appears to
have some replication functionality. Try it.
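
A minimal DRBD resource definition for the pool device might look roughly
like this (hostnames, addresses and disk names are placeholders; a sketch
of the idea, not a complete setup):

    # /etc/drbd.d/backuppc.res
    resource backuppc {
        net { protocol C; }           # fully synchronous replication
        device    /dev/drbd0;
        disk      /dev/sdb1;          # backing partition on each node
        meta-disk internal;
        on node1 { address 10.0.0.1:7789; }
        on node2 { address 10.0.0.2:7789; }
    }

You would then create the filesystem on /dev/drbd0 and mount that as the
BackupPC data directory. For the periodic disconnect/reconnect idea:

    drbdadm disconnect backuppc    # stop replicating while backups run
    drbdadm connect backuppc       # reconnect and resync afterwards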

I would say use DRBD ;) And definitely forget about rsync!

Greetings

Christian





Re: [BackupPC-users] R: Re: Storage replica

2015-12-14 Thread Alexander Moisseev
On 14.12.15 14:33, absolutely_f...@libero.it wrote:
> Hi Paolo,
> thank you for your quick reply.
>
> My pool is 2.5 TB.
>
> Sincerely, speed is  not crucial in my case: I just need to store data on a 
> different location, for disaster recovery.
> My doubt is: if I keep two pools synced with rsync, will I able to use it? Or 
> should I care about something in particular?
>
Yes, you will be able to use it, assuming you synced the whole __TOPDIR__ (a 
backup of __CONFDIR__ will also be helpful).
But rsyncing a pool that contains a lot of hardlinks will take an unacceptably 
long time and require a lot of RAM.
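
For reference, the whole-__TOPDIR__ run would be something along these lines 
(paths are examples), and it is exactly this run that tends to take very long 
and eat RAM on a big pool:

    rsync -aH --numeric-ids --delete /var/lib/backuppc/ replica:/var/lib/backuppc/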

You might want to consider block-level backup methods instead, but it depends 
on the file system you are using.
For instance, for FFS (UFS), dump does the job. I never tried BackupPC on ZFS, 
but I believe 'zfs send' should work too.
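
If the pool lives on ZFS, the snapshot-shipping idea might look like this 
(dataset names are made up; I have not verified this with BackupPC):

    # full send on the first run
    zfs snapshot tank/backuppc@rep1
    zfs send tank/backuppc@rep1 | ssh replica zfs receive -F tank/backuppc
    # later runs send only the delta between two snapshots
    zfs snapshot tank/backuppc@rep2
    zfs send -i @rep1 tank/backuppc@rep2 | ssh replica zfs receive tank/backuppc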
--
Alexander




[BackupPC-users] R: Re: Storage replica

2015-12-14 Thread absolutely_f...@libero.it
Hi Paolo, thank you for your quick reply.
My pool is 2.5 TB.
Frankly, speed is not crucial in my case: I just need to store the data in a 
different location, for disaster recovery. My doubt is: if I keep two pools 
synced with rsync, will I be able to use the copy? Or should I care about 
something in particular?
Thank you again

Original message
From: paolo.basen...@fcr.re.it
Date: 14/12/2015 11.06
To: 
Subject: Re: [BackupPC-users] Storage replica




  
  
Hi,

rsync is a good solution if your pool is not very big. That is because
you have to keep the hard links in the mirror location, and rsync is
not very efficient at hard link mirroring.

I tried rsync in the beginning, but the mirroring time was too long
even on a 1 Gb/s local link (two pools: 750 GiB and 350 GiB).

My solution is drbd (drbd.org). Two CentOS 7 nodes, each with
BackupPC 3.3.1; each node has a pool located on a drbd-backed
partition.
After the initial sync, which took more than a day, the pools are
mirrored very quickly. Even after taking one host down for maintenance
for some hours, the resync is quick.

Best regards
Paolo


On 14/12/2015 10:08, absolutely_f...@libero.it wrote:

Hi,
I would like to copy the entire BackupPC pool to a secondary storage
on a remote location (for disaster recovery).
How can I do this?
For example, is it possible to rsync data between the two storage
systems while BackupPC is running?
Thank you very much
--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/