Re: [BackupPC-users] Latest rsync.exe and cygwin1.dll for Windows 7.

2011-07-11 Thread દિનેશ શાહ/दिनेश शाह
Hello Group,

On Sun, Jul 10, 2011 at 8:39 PM, Doug Lytle  wrote:
> Xuo wrote:
>> can anybody provide a tar/zip
>
> I'd love to see this as well!

We @ Shah Micro System Pvt. Ltd. are working to package the latest
versions of Cygwin + OpenSSH + rsync in a single setup.exe file.

We hope to release the setup binary later this month and installer
sources later.

>
> Doug

HTH
With regards,
-- 
--Dinesh Shah :-)
Shah Micro System Pvt. Ltd.
+91-98213-11906
+91-9833-TICKET
http://www.shahmicro.com
http://iopt.in
http://crm.iopt.in
Blog: http://dineshah.wordpress.com

--
All of the data generated in your IT infrastructure is seriously valuable.
Why? It contains a definitive record of application performance, security 
threats, fraudulent activity, and more. Splunk takes this data and makes 
sense of it. IT sense. And common sense.
http://p.sf.net/sfu/splunk-d2d-c2
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Latest rsync.exe and cygwin1.dll for Windows 7.

2011-07-11 Thread Doug Lytle
Dinesh Shah (દિનેશ શાહ/दिनेश शाह) wrote:
> We hope to release the setup binary later this month and installer
> sources later.

Excellent!

Thank you very much,

Doug

-- 
Ben Franklin quote:

"Those who would give up Essential Liberty to purchase a little Temporary 
Safety, deserve neither Liberty nor Safety."




[BackupPC-users] "Other" xfer errors?

2011-07-11 Thread Timothy Murphy
What are "other" transfer errors, as in

18 xferErrs (0 bad files, 0 bad shares, 18 other)
?

-- 
Timothy Murphy  
e-mail: gayleard /at/ eircom.net
tel: +353-86-2336090, +353-1-2842366
s-mail: School of Mathematics, Trinity College, Dublin 2, Ireland




[BackupPC-users] rsyncd on mac

2011-07-11 Thread Eduardo Díaz Rodríguez
Negotiated protocol version 28


Hi, I see that File-RsyncP-0.70 only supports up to protocol version 28,
but I have some interesting findings for backing up Mac systems.

The default Mac rsync (2.6.8) is very slow in my tests.

I installed the latest rsync (3.0.7) and saw a speed improvement.

My questions are two:

1. What is the best and fastest option for backing up remote files
(rsync or rsyncd)? For me it is rsyncd.

For example, two full backups:

rsync 2.6.8 (Mac version): 541593.5 MB of files, TIME: 448.1
rsync 3.0.7 (https://github.com/MacLemon/mlbackup/): 543157.5 MB of
files, TIME: 276.1

Use the latest version, because it is much faster...

2. Is there any chance that the File-RsyncP Perl module will support
protocol 30 in the future?

Regards!

P.S. Has anybody else run tests?

Would it be interesting to put this in the wiki? I have a complete
guide for getting rsyncd running on Mac OS. ;-)
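For reference, a minimal rsyncd setup of the kind described here might look like the following; the module name, path, and credentials below are made up for illustration, and the actual guide may differ:

```shell
# Minimal rsync daemon config (hypothetical module name and paths)
cat > /etc/rsyncd.conf <<'EOF'
pid file = /var/run/rsyncd.pid

[backups]
    path = /Users/Shared/backups
    read only = yes
    auth users = backuppc
    secrets file = /etc/rsyncd.secrets
EOF

# Username:password pair for the module above (hypothetical credentials)
echo 'backuppc:secret' > /etc/rsyncd.secrets
chmod 600 /etc/rsyncd.secrets

# Start the newer rsync (e.g. 3.0.7) as a daemon instead of the stock 2.6.8
/usr/local/bin/rsync --daemon --config=/etc/rsyncd.conf
```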






Re: [BackupPC-users] Latest rsync.exe and cygwin1.dll for Windows 7.

2011-07-11 Thread Eduardo Díaz Rodríguez
Hi, I have a Windows 7 (Spanish) machine with UTF-8 file names to back up.

I always use the same configuration:

DeltaCopy.

Overwrite its files with these:

http://www.pk25.com/temp/DeltaCopyFiles_cygwin1.7.7z

These files give you UTF-8 support, long directory names, and everything
else you need.

regards


On Mon, 11 Jul 2011 05:34:21 -0400, Doug Lytle wrote:
> Dinesh Shah (દિનેશ શાહ/दिनेश शाह) wrote:
>> We hope to release the setup binary later this month and installer
>> sources later.
>
> Excellent!
>
> Thank you very much,
>
> Doug
>
> --
> Ben Franklin quote:
>
> "Those who would give up Essential Liberty to purchase a little
> Temporary Safety, deserve neither Liberty nor Safety."
>
>

-- 
===

If Jesus saves, Norton makes backups.
-- Www.frases.com.

===



[BackupPC-users] unsubcribe

2011-07-11 Thread vicent roca daniel


Re: [BackupPC-users] My Solution for "Off-Site"

2011-07-11 Thread Carl Wilhelm Soderstrom
On 07/10 08:22 , Christian Völker wrote:
> So what I did was to set up a physical small sized box (old desktop
> should work). No RAID involved. I installed there distributed remote
> block device (drbd- use Google). Same on backuppc machine. So I have a
> physical separated RAID1 available- just through network. Both drbd
> devices are using LVM volumes as backing devices so I can enlarge/
> shrink at will. The external server addditionaly uses the snaprotate.pl
> script to create 4 snapshots of the drbd device at weekly rate.

So you're running LVM and DRBD on the ESX guest machine?
Isn't that a notable performance hit? Especially keeping 4 snapshots
simultaneously?

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com



[BackupPC-users] Location of TopDir

2011-07-11 Thread Mark Phillips
Can TopDir be a network drive?

I have a NAS, and I would like to use rsync to store the data on the NAS
from the backuppc server. Is this possible? How do I configure it?

Mark


Re: [BackupPC-users] Location of TopDir

2011-07-11 Thread Andy Stetzinger
The short answer is yes. 

I've got backuppc running on a virtual machine, with a 24TB FreeNAS NFS mount 
set up. 

From my config.pl: 
# TopDir - where all the backup data is stored 
$Conf{TopDir} = '/mnt/tank'; 

So, just make a directory on the backuppc machine, in my case, /mnt/tank, and 
NFS mount it: 

$ sudo mount -t nfs 192.168.X.X:/mnt/tank /mnt/tank 


That's the real short version. 
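To make that mount persistent across reboots, the equivalent /etc/fstab entry (same hypothetical IP and paths as above) would be:

```shell
# /etc/fstab entry matching the mount command above (adjust IP and paths):
# 192.168.X.X:/mnt/tank  /mnt/tank  nfs  rw,hard,intr  0  0

# After editing fstab, mount everything it lists:
sudo mount -a
```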

- "Mark Phillips"  wrote: 
> Can TopDir be a network drive? 
> 
> I have a NAS, and I would like to use rsync to store the data on the NAS from 
> the backuppc server. Is this possible? How do I configure it? 
> 
> Mark 
> 

-- 
Andy Stetzinger 
Riptide Software 
Information Technology 
V: 321-296-7724 ext: 208 
http://www.riptidesoftware.com 
║▌║█║▌║▌││║▌║█║▌│║▌║█║▌║▌││║▌║ 





Re: [BackupPC-users] My Solution for "Off-Site"

2011-07-11 Thread Christian Völker
On 11/07/2011 16:15, Carl Wilhelm Soderstrom wrote:
>
> So you're running LVM and DRBD on the ESX guest machine?
> Isn't that a notable performance hit? Especially keeping 4 snapshots
> simultaneously?
>
I'm running LVM on a virtual machine with drbd on top, yes. But without
snapshots. The snapshots are taken only on the physical box; there's no
sense in taking snapshots on both. The virtual machine is stored on a
RAID10 on the ESX host.

And performance? Well, I don't know. It backs up my roughly 20 machines
(mostly Linux) without any issues. I don't mind the backup or re-org of
the files taking 30% longer... it's still all done within 24 hours.

Greetings

Christian






Re: [BackupPC-users] Location of TopDir

2011-07-11 Thread Richard Shaw
On Mon, Jul 11, 2011 at 11:04 AM, Mark Phillips
 wrote:
> Can TopDir be a network drive?
>
> I have a NAS, and I would like to use rsync to store the data on the NAS
> from the backuppc server. Is this possible? How do I configure it?

To add to Andy's comments: obviously, the underlying filesystem still
has to support hardlinks...

An alternative to changing TopDir is to symlink or bind-mount the
storage to the default TopDir. Two reasons for this are:

1. If you change the backup location, you just update the symlink or
bind mount.
2. If you're running on a system with SELinux enabled (i.e. enforcing),
it helps avoid SELinux policy issues when there are specific
contexts/policies enforced for BackupPC (as on Fedora).
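As a sketch of both options, assuming the packaged default TopDir of /var/lib/backuppc and storage mounted at /mnt/tank (both paths are examples):

```shell
# Option 1: symlink the default TopDir to the real storage
mv /var/lib/backuppc /var/lib/backuppc.orig    # preserve anything already there
ln -s /mnt/tank /var/lib/backuppc

# Option 2: bind-mount the storage over the default TopDir
mount --bind /mnt/tank /var/lib/backuppc
# (add a matching bind entry to /etc/fstab to survive reboots)
```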

Richard



Re: [BackupPC-users] My Solution for "Off-Site"

2011-07-11 Thread Carl Wilhelm Soderstrom
On 07/11 07:54 , Christian Völker wrote:
> On 11/07/2011 16:15, Carl Wilhelm Soderstrom wrote:
> >
> > So you're running LVM and DRBD on the ESX guest machine?
> > Isn't that a notable performance hit? Especially keeping 4 snapshots
> > simultaneously?
> >
> I'm running LVM on a virtual machine and on top drbd, yes. But without
> snapshots. The snapshots are taken only on the physical box. There's no
> sense in taking snapshots on both. The virtual machine is stored on a
> RAID10 on the ESX.
> 
> And performance? Well, I don't know. It backups up my around 20 machines
> (mostly Linux) without any issues. I don't mind the backup or re-org of
> the files taking 30% longer...it's still all done within 24hours.

Ok. Thanks for letting me know. Glad it works for you in your environment.

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com



Re: [BackupPC-users] My Solution for "Off-Site"

2011-07-11 Thread Eduardo Díaz Rodríguez
I have a similar situation but handle it a different way.

One cluster, two machines, one service (Samba); the software RAID1 is
provided by drbd.

Each cluster node has one hard disk for backups (sda for the data (drbd)
and the OS; sdb for BackupPC and a dump of the OS).

The backup is normally local; now I use rsyncd: every server makes a
copy of the data using the IP of the cluster (rsync to the cluster's IP
and fetch the data).

Two identical copies... :-)..

On Sun, 10 Jul 2011 08:22:25 +0200, Christian Völker wrote:
> Hi,
> I just want to share my solution to keep an additional "backup" from 
> the
> original BackupPC store.
>
> As we all know it's not really a good solution to rsync the BackupPC
> datastore to somewhere else- due to the hardlinks. Doing manual image
> copies (ie by swapping the drives of a RAID-1 array) has the big
> disadvantage as it's a manual step.
>
> So I decided to combine a couple of other techniques here:
> First, my BackupPC is running as a virtual machine on VMware ESX host
> sharing datastore and resources with the machines to back up. So the
> obvious disadvantage is the case when the ESX host fails- how should 
> I
> restore this guy and the BackuPC machine? Well ESX is fairly stable 
> but
> you never know.
> My storage uses in total 952GB of backup data. So it's really no good
> idea to do an rsync here. Swapping drives manually is no good either 
> as
> the ESX host would complain.
>
> So what I did was to set up a physical small sized box (old desktop
> should work). No RAID involved. I installed there distributed remote
> block device (drbd- use Google). Same on backuppc machine. So I have 
> a
> physical separated RAID1 available- just through network. Both drbd
> devices are using LVM volumes as backing devices so I can enlarge/
> shrink at will. The external server addditionaly uses the 
> snaprotate.pl
> script to create 4 snapshots of the drbd device at weekly rate.
> The drbd device has BackuPC installed, too. So I can easily tell him 
> to
> take over and restore.
>
> So with my setup I'm nearly prepared for everything at relatively low 
> cost.
> -backupc itself fails
> +drbd one will take over after some minor (manual) steps.
> -backupc wipes out it's storage (script failure or file system issue)
> +I will roll back on the drbd to one of the previous LVM snapshots 
> (up
> to four weeks back)
> -ESX host fails without removing backuppc
> +Set drbd as primary and restore ESX (or just reinstall, it's faster)
> -ESX host fails with wiping out the backuppc VM
> +Set drbd as primary and restore everything from there on
>
> So in summary I can easily keep my backuppc storage remotely in sync
> with drbd and keep snaphshots to roll back weeks. The initial sync 
> and
> data migration to drbd device took 24hours while- you could reduce
> backuppc downtime by not doing the dd command in parallel to the 
> initial
> sync.
>
> Only disadvantage is the physical drbd is currently in same building 
> (my
> home) as the original ESX host. But this is not likely to change due 
> to
> my low external uplink bandwidth here- someone could use the drbd 
> proxy
> to use small lines for sync. But this proxy is not available as free
> software. For me it's fine- if my house burns down I have more 
> serious
> issues than my BackupPC storage ;-)
>
> GReetings
>
> Christian

-- 
===

If Jesus saves, Norton makes backups.
-- Www.frases.com.

===



Re: [BackupPC-users] Latest rsync.exe and cygwin1.dll for Windows 7.

2011-07-11 Thread Xuo
On 11/07/2011 09:53, Dinesh Shah (દિનેશ શાહ/दिनेश शाह) wrote:
> Hello Group,
>
> On Sun, Jul 10, 2011 at 8:39 PM, Doug Lytle  wrote:
>> Xuo wrote:
>>> can anybody provide a tar/zip
>> I'd love to see this as well!
> We @ Shah Micro System Pvt. Ltd. are working to package the latest
> versions of Cygwin + OpenSSH + RSync in single setup.exe file.
>
> We hope to release the setup binary later this month and installer
> sources later.
Great !!

Xuo.
>
>> Doug
> HTH
> With regards,




Re: [BackupPC-users] Latest rsync.exe and cygwin1.dll for Windows 7.

2011-07-11 Thread Xuo
On 11/07/2011 12:25, Eduardo Díaz Rodríguez wrote:
> Hi I have a windows 7 Spanish UTF8 files for backup my pc.
>
> I use aways the same configuration.
>
> DeltaCopy.
>
> Overwrite the files with this.
>
> http://www.pk25.com/temp/DeltaCopyFiles_cygwin1.7.7z
Hi,

I'll try your files next weekend.

Thank you.

Xuo.
>
> This is UTF-8 support files long directorys and all you need.
>
> regards
>
>
> On Mon, 11 Jul 2011 05:34:21 -0400, Doug Lytle wrote:
>> Dinesh Shah (દિનેશ શાહ/दिनेश शाह) wrote:
>>> We hope to release the setup binary later this month and installer
>>> sources later.
>> Excellent!
>>
>> Thank you very much,
>>
>> Doug
>>
>> --
>> Ben Franklin quote:
>>
>> "Those who would give up Essential Liberty to purchase a little
>> Temporary Safety, deserve neither Liberty nor Safety."
>>
>>
>>




Re: [BackupPC-users] My Solution for "Off-Site"

2011-07-11 Thread Les Mikesell
On 7/11/2011 1:43 PM, Eduardo Díaz Rodríguez wrote:
> I have a similar situation but apply diferent way.
>
> One cluster two machines. one service (samba) the RAID1 software is
> used by drbd.
>
> Every cluster has one hard disk for backups (sda for data(drbd) and SO,
> and sdb backup-pc, and dump of the OS).
>
> the backup normaly is in local now I use rsyncd every server make a
> copy of the data using the IP of the cluster. rsync to IP of the
> cluster, and get de data.
>
> two same copys... :-)..

Without the lvm snapshots, isn't there a danger of something corrupting 
the master server's filesystem and having it propagate to the drbd copy 
instantly?

-- 
   Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] Location of TopDir

2011-07-11 Thread Les Mikesell
On 7/11/2011 1:08 PM, Richard Shaw wrote:
> On Mon, Jul 11, 2011 at 11:04 AM, Mark Phillips
>   wrote:
>> Can TopDir be a network drive?
>>
>> I have a NAS, and I would like to use rsync to store the data on the NAS
>> from the backuppc server. Is this possible? How do I configure it?
>
> To add to Andy's comments. Obviously the underlying file-system still
> has to support hardlinks...

Which means you need to mount it via NFS, not CIFS.

> An alternative to changing TopDir is to symlink or bind mount the
> storage to the default TopDir. Two reasons for this are:
>
> 1. If you change backup location you just update the symlink or bind command
> 2. If you're running on a system with SELinux enabled (i.e.,
> enforcing) it helps avoid SELinux policy issues depending on if there
> are specific context/policies enforced for BackupPC (like Fedora).

If you are using a packaged install (.deb/.rpm) earlier than 3.2, you'll 
need to use the symlink/mount approach to keep the expected TopDir 
location (normally /var/lib/backuppc) set by the package builder.

-- 
   Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] Location of TopDir

2011-07-11 Thread Holger Parplies
Hi,

Mark Phillips wrote on 2011-07-11 09:04:31 -0700 [[BackupPC-users] Location of 
TopDir]:
> Can TopDir be a network drive?

no, BackupPC doesn't run on Windoze.

> I have a NAS, and I would like to use rsync to store the data on the NAS
> from the backuppc server. Is this possible?

Not really, if your pool is or will at any point be "large" (where the exact
value of "large" depends on available RAM and rsync version). Try it out to
see if it works for you, but don't rely on it. It just doesn't scale well for
growing pools.


Ok. Now to the question you probably meant to ask.

Sure, you can mount the pool FS via NFS (or anything else that will support
hardlinks correctly - note that this requirement includes the file system
behind the network mount; you can't export eg. a FAT file system via NFS and
expect that to work, so if you're talking about a NAS, you'll have to check).
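A quick way to check whether a filesystem (including one behind an NFS mount) handles hardlinks is to create one and inspect the link count; a sketch — in practice, run it inside the mount you intend to use for the pool rather than in a temp dir:

```shell
# Create a file and a hardlink to it, then verify both names share one inode
cd "$(mktemp -d)"        # in practice: cd into the NFS-mounted pool FS
echo test > a
ln a b
# GNU stat uses -c %h for the link count; BSD stat uses -f %l
links=$(stat -c %h a 2>/dev/null || stat -f %l a)
echo "link count: $links"    # 2 means the hardlink was created properly
```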

You should note, though, that BackupPC does a lot of I/O on the pool, so
slowing down this part (as a network mount is bound to do) will slow down
overall backup performance. Furthermore, there have been reports of problems
with broken NFS implementations on some NAS devices which only become apparent
under heavy usage such as with BackupPC.

For details on moving TopDir, see
http://sourceforge.net/apps/mediawiki/backuppc/index.php?title=Change_archive_directory

Regards,
Holger



Re: [BackupPC-users] My Solution for "Off-Site"

2011-07-11 Thread Christian Völker
On 11/07/2011 21:03, Les Mikesell wrote:
>
> Without the lvm snapshots, isn't there a danger of something corrupting 
> the master server's filesystem and having it propagate to the drbd copy 
> instantly?
>
You're absolutely right. And this is the reason why I have the LVM
snapshots. I can go back 5 weeks with the snapshots. That's enough to
prevent any serious issues. When the file system gets unreadable, I
usually notice it immediately and roll back to the previous snapshot.
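The weekly snapshot-and-rollback cycle described here can be sketched with plain LVM commands (the volume group and LV names are hypothetical; snaprotate.pl automates the rotation part):

```shell
# Take a snapshot of the LV backing the drbd device (names are examples)
lvcreate --snapshot --size 20G --name backup-week1 /dev/vg0/drbd-backing

# ...later, to roll the origin back to that snapshot:
umount /var/lib/backuppc              # the origin must not be in use
lvconvert --merge vg0/backup-week1    # merge runs in the background
# The snapshot is consumed by the merge; remount once it completes.
```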

BTW: The same would happen with the often-proposed "take a disk out of
your RAID1" approach. In some way you have to trust the filesystem. Of
course, rsync'ing it from host A with ext3 to host B with XFS would be a
better solution security-wise. But as you know, rsync is not the best
solution here.

I trust my file system at least for 5 weeks ;)

Greetings

Christian



Re: [BackupPC-users] My Solution for "Off-Site"

2011-07-11 Thread Les Mikesell
On 7/11/2011 2:13 PM, Christian Völker wrote:
>
>> Without the lvm snapshots, isn't there a danger of something corrupting
>> the master server's filesystem and having it propagate to the drbd copy
>> instantly?
>>
> You're absolutely right. And this is the reason why I have the LVM
> snapshots. I can go back 5 weeks with the snapshots.  That's enough to
> prevent any serious issues. When the file system gets unreadable I
> usually notice it immediately- and roll back to previous snapshot.
>
> BTW: The same would happen with the often so proposed "take off a disk
> of your RAID1".

The way my 'take a disk off RAID1' works is that there are 3 spare 
disks, with at least one always offsite in the rotation and another one 
wouldn't be brought back if there was any reason to suspect that the 
filesystem was corrupt as copied on the most recent.

> In some way you have to trust the filesystem.

You have to trust that it works when it appears to be working.  You 
don't have to trust it to keep working through your next copy.

> Of course
> rsync'ing it from host A with ext3 to host B with XFS would be a better
> solution security wise. But as you know rsync is not the best solution here.

Even rsync'ing would leave you in a strange state if the source dies in 
mid-copy to your only target.

> I trust my file system at least for 5 weeks ;)

I don't trust anything in the same building or anything that can be 
corrupted by a live copy.  And I don't know enough about lvm to 
understand how you can drbd to the live partition while keeping 
snapshots of old copies.  I wouldn't have expected that to work.  Are 
they really layered correctly so the lvm copy-on-write business works?

-- 
Les Mikesell
 lesmikes...@gmail.com

--
All of the data generated in your IT infrastructure is seriously valuable.
Why? It contains a definitive record of application performance, security 
threats, fraudulent activity, and more. Splunk takes this data and makes 
sense of it. IT sense. And common sense.
http://p.sf.net/sfu/splunk-d2d-c2
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] Different UID numbers for backuppc on 2 computers

2011-07-11 Thread Timothy Murphy
I want to archive backuppc on machine A to machine B.
(Both are running CentOS-5.6 .)
The problem is that backuppc has different UIDs on the 2 machines:
on A it is 101, on B it is 102.

Now when I NFS mount /archive on machine B on /archive on machine A
I am told that /archive belongs to avahi-autoipd ,
which has UID 102 on machine A.

This seems to prevent backuppc from archiving onto /archive .

Is there any simple way of changing a UID
(together with all the files it owns)?

Alternatively, is there a way of telling backuppc to ignore the UIDs?


--
Timothy Murphy  
e-mail: gayleard /at/ eircom.net
tel: +353-86-2336090, +353-1-2842366
s-mail: School of Mathematics, Trinity College, Dublin 2, Ireland


--
All of the data generated in your IT infrastructure is seriously valuable.
Why? It contains a definitive record of application performance, security 
threats, fraudulent activity, and more. Splunk takes this data and makes 
sense of it. IT sense. And common sense.
http://p.sf.net/sfu/splunk-d2d-c2
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Different UID numbers for backuppc on 2 computers

2011-07-11 Thread Les Mikesell
On 7/11/2011 4:55 PM, Timothy Murphy wrote:
> I want to archive backuppc on machine A to machine B.
> (Both are running CentOS-5.6 .)
> The problem is that backuppc has different UIDs on the 2 machines:
> on A it is 101, on B it is 102.

What do you mean by 'archive'?

> Now when I NFS mount /archive on machine B on /archive on machine A
> I am told that /archive belongs to avahi-autoipd ,
> which has UID 102 on machine A.

It shouldn't matter to the machine exporting the nfs directory whether 
there is a local user with the same uid or not.  Or are you trying to 
access the files from both machines?

> This seems to prevent backuppc from archiving onto /archive .

All you should need is write access (which might be from having the same 
owner at the top of the tree).  If you permit root nfs access from the 
backuppc client you can arrange the proper permissions from there.

> Is there any simple way of changing a UID
> (together with all the files it owns)?

You can't do both at once.  You can change the uid in the passwd file 
but your real problem is that some other package took the uid you want.

> Alternatively, is there a way of telling backuppc to ignore the UIDs?

No, but if you really only want one instance of backuppc running, it 
doesn't matter what the nfs-server side thinks about the owner's name.

-- 
   Les Mikesell
lesmikes...@gmail.com

--
All of the data generated in your IT infrastructure is seriously valuable.
Why? It contains a definitive record of application performance, security 
threats, fraudulent activity, and more. Splunk takes this data and makes 
sense of it. IT sense. And common sense.
http://p.sf.net/sfu/splunk-d2d-c2
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] My Solution for "Off-Site"

2011-07-11 Thread Christian Völker
Hi Les,


On 11/07/2011 21:58, Les Mikesell wrote:
>> BTW: The same would happen with the often-proposed "take off a disk
>> of your RAID1".
> The way my 'take a disk off RAID1' works is that there are 3 spare 
> disks, with at least one always offsite in the rotation and another one 
> wouldn't be brought back if there was any reason to suspect that the 
> filesystem was corrupt as copied on the most recent.
I'm aware of the rotation there; it's essentially the same approach, just
done at a different level. You have three disks and swap them at some
point; I take snapshots instead. In both cases a filesystem error can get
copied over, too. Unless we notice it within our rotation cycle, we're
lost. It's much the same; the differences are just that my physical box is
located in the same building and that you have a manual step :) With
enough bandwidth I'd prefer to move my physical box out of this building,
but it's a question of money.
I think I might move it to the garage, though :)

>
> I don't trust anything in the same building or anything that can be 
> corrupted by a live copy.  And I don't know enough about lvm to 
> understand how you can drbd to the live partition while keeping 
> snapshots of old copies.  I wouldn't have expected that to work.  Are 
> they really layered correctly so the lvm copy-on-write business works?
>
Yes, this works absolutely fine.  I have the physical disks as LVM
"physical volumes", and based on those I created an LVM "logical volume";
on both hosts: the primary (a virtual machine) and the secondary, a
physical box with just a single disk. On top of these identical LVM
volumes I've set up drbd. Every write on the primary is transferred to the
secondary. Works pretty well. Taking a snapshot of the LVM volume
doesn't affect the drbd device at all.
Now let's assume I notice a filesystem error (or a script wiped out
everything); it will of course have been copied over to the secondary. I'd
then stop drbd on both nodes, return to the previous snapshot, and tell
the secondary to become primary. Start syncing and I have my data back.
The only thing I have to evaluate is the proper size of the snapshot. If
it fills up, the primary isn't affected.
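
A command-level sketch of the weekly snapshot and the rollback described
above; the volume group, LV, snapshot size, and drbd resource names
(`vg0`, `backuppc`, `r0`) are hypothetical, and the exact syntax varies by
distribution and drbd version, so treat this as an outline, not a recipe:

```
# weekly, on the secondary: keep a point-in-time copy of the backing LV
lvcreate --snapshot --size 100G --name backuppc_wk27 /dev/vg0/backuppc

# after noticing corruption: stop replication, roll the LV back, promote
drbdadm down r0                        # on both nodes
lvconvert --merge vg0/backuppc_wk27    # fold the snapshot back into the LV
drbdadm up r0
drbdadm primary --force r0             # the former secondary takes over
```

Note that `lvconvert --merge` defers the merge until the LV is next
activated if it is in use, so the rollback only completes once drbd has
released the device.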

Greetings

christian




--
All of the data generated in your IT infrastructure is seriously valuable.
Why? It contains a definitive record of application performance, security 
threats, fraudulent activity, and more. Splunk takes this data and makes 
sense of it. IT sense. And common sense.
http://p.sf.net/sfu/splunk-d2d-c2
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Different UID numbers for backuppc on 2 computers

2011-07-11 Thread Carl Wilhelm Soderstrom
On 07/11 11:55 , Timothy Murphy wrote:
> I want to archive backuppc on machine A to machine B.
> (Both are running CentOS-5.6 .)
> The problem is that backuppc has different UIDs on the 2 machines:
> on A it is 101, on B it is 102.
> 
> Now when I NFS mount /archive on machine B on /archive on machine A
> I am told that /archive belongs to avahi-autoipd ,
> which has UID 102 on machine A.
> 
> This seems to prevent backuppc from archiving onto /archive .
> 
> Is there any simple way of changing a UID
> (together with all the files it owns)?

vipw then vigr to edit the UIDs in /etc/passwd and /etc/group. You will need
to do vipw -s and vigr -s to change the /etc/shadow and /etc/gshadow as
well.

Then use a command like 'find / -uid 102 -exec chown backuppc: {} \;' to
change the ownership of all the files owned by UID 102 to whatever UID
backuppc is. 
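
On distributions that ship the shadow utilities, `usermod`/`groupmod` make
the same edits in one step; a sketch, assuming the target UID 101 is still
free on this machine and everything runs as root (102 being the old
backuppc UID from this thread):

```
usermod -u 101 backuppc     # rewrites the passwd/shadow entries in place
groupmod -g 101 backuppc    # only if the group ID must move as well
find / -xdev -uid 102 -exec chown backuppc: {} \;   # re-own leftover files
```

`usermod -u` re-owns only the user's home directory by itself; the find
pass catches everything else.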

> Alternatively, is there a way of telling backuppc to ignore the UIDs?

No.
UIDs are the real identifying information that maps file ownership onto
users. When you see a name like 'backuppc' associated with a file, it's
because the 'ls' program (or whatever other one) did a lookup of the UID to
the name in /etc/passwd.

If you do an 'ls -n', it simply reports the UIDs and skips the name-lookup
step.
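
This is easy to see on any machine; a small sketch (no root needed) that
creates a scratch file and reads the raw numeric owner the way 'ls -n'
does:

```shell
# Ownership is stored as a number; 'ls' merely translates it via passwd.
f=$(mktemp)
uid=$(stat -c %u "$f")   # the raw UID, exactly what 'ls -n' would print
name=$(id -un)           # the name lookup that plain 'ls' performs for us
echo "uid=$uid name=$name"
rm -f "$f"
```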

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com

--
All of the data generated in your IT infrastructure is seriously valuable.
Why? It contains a definitive record of application performance, security 
threats, fraudulent activity, and more. Splunk takes this data and makes 
sense of it. IT sense. And common sense.
http://p.sf.net/sfu/splunk-d2d-c2
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Different UID numbers for backuppc on 2 computers

2011-07-11 Thread Holger Parplies
Hi,

I agree with Les:

Les Mikesell wrote on 2011-07-11 17:18:07 -0500 [Re: [BackupPC-users] Different 
UID numbers for backuppc on 2 computers]:
> On 7/11/2011 4:55 PM, Timothy Murphy wrote:
> > I want to archive backuppc on machine A to machine B.
> > (Both are running CentOS-5.6 .)
> > The problem is that backuppc has different UIDs on the 2 machines:
> > on A it is 101, on B it is 102.
> 
> What do you mean by 'archive'?

If you could describe what you are trying to do (in a way people can
understand), that would help in giving meaningful answers.

Do you want to
- create an archive of a host (BackupPC_archive) and store that on an NFS
  export of machine B?
- copy your pool from machine A to machine B?
- use an NFS export of machine B as pool FS for a BackupPC instance running
  on machine A?
- tar together your BackupPC installation on machine A and store the tar
  file on machine B, in case you might decide to use it again?
- do something completely different?

In none of the first four cases is it a problem that the backuppc user has
different UIDs on the two machines, so it must be the last?

> > Now when I NFS mount /archive on machine B on /archive on machine A
> > I am told that /archive belongs to avahi-autoipd ,

Deinstall avahi-autoipd. That's the only thing I believe it is good for,
except for frustrating admins that don't want it installed, yet don't want to
reinvent their packaging system's dependency mechanism.

Seriously, 'mount' gives you informational output about the owner of a
directory?

> It shouldn't matter to the machine exporting the nfs directory whether 
> there is a local user with the same uid or not.  Or are you trying to 
> access the files from both machines?

Well, the only point would be that avahi-autoipd would have access to the
pool, which might not be a good idea.

> > This seems to prevent backuppc from archiving onto /archive .
> 
> All you should need is write access (which might be from having the same 
> owner at the top of the tree).  If you permit root nfs access from the 
> backuppc client you can arrange the proper permissions from there.

I'm guessing (and I *hate* to do that) that you set up permissions
incorrectly. In fact, I don't see why you have a backuppc user on the NFS
server at all.

> > Is there any simple way of changing a UID
> > (together with all the files it owns)?
> 
> You can't do both at once.  You can change the uid in the passwd file 
> but your real problem is that some other package took the uid you want.

Well, you could change the UID of backuppc on the client (assuming the UID the
server uses is free), or you could change UIDs on both client and server to
a common value, that is free on both. Or you could use NIS. Or you could set
up UID mapping for NFS (I've never needed to do that, but I believe it is
possible). Or you could forget about the backuppc user on the NFS server,
though there actually *is* a point in having that user, namely to prevent
something else from allocating the same UID and thus gaining access to the
pool.

Carl Wilhelm Soderstrom wrote on 2011-07-11 17:19:09 -0500 [Re: 
[BackupPC-users] Different UID numbers for backuppc on 2 computers]:
> On 07/11 11:55 , Timothy Murphy wrote:
> > Is there any simple way of changing a UID
> > (together with all the files it owns)?
> 
> vipw then vigr to edit the UIDs in /etc/passwd and /etc/group. You will need
> to do vipw -s and vigr -s to change the /etc/shadow and /etc/gshadow as
> well.

There is no UID information in the shadow files, and unless you're also
worried about the group, you don't need /etc/group either ;-). If you are, you
probably need to change the group of the files, too.

> Then use a command like 'find / -uid 102 -exec chown backuppc: {} \;' to
> change the ownership of all the files owned by UID 102 to whatever UID
> backuppc is. 

Well I hope you don't have many files ... how about either
'chown -R backuppc:backuppc /archive' (assuming that's TopDir) - there are no
files under TopDir *not* belonging to backuppc, or at least there shouldn't be,
and there shouldn't be any files belonging to backuppc elsewhere (check with
find) - or 'find / -uid 102 -print0 | xargs -0 chown backuppc:backuppc'. Just
be careful about what you are doing. Is that the previous UID of avahi-autoipd?
Are there any files owned by that UID that are *not* part of BackupPC? Whenever
something is messed up, and you are trying to clean up the mess, try not to
make the mess bigger in the process ;-).
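
For what it's worth, the -print0/xargs -0 pipeline can be rehearsed safely
on a throwaway tree first; chown needs root, so a harmless 'ls -ld' stands
in below (swap it for 'chown backuppc:backuppc' on the real pool):

```shell
# Rehearse the null-delimited pipeline on scratch files, including one
# with spaces in its name, which is exactly what -print0/-0 protect.
dir=$(mktemp -d)
touch "$dir/plain" "$dir/name with spaces"
n=$(find "$dir" -type f -print0 | xargs -0 ls -ld | wc -l)
echo "files handled: $n"
rm -rf "$dir"
```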

> > Alternatively, is there a way of telling backuppc to ignore the UIDs?

BackupPC doesn't really care about UIDs at this point. The kernel does. I
don't think you're asking "is there a way to tell the kernel to ignore file
system permissions".

Regards,
Holger

--
All of the data generated in your IT infrastructure is seriously valuable.
Why? It contains a definitive record of application performance, security 

Re: [BackupPC-users] My Solution for "Off-Site"

2011-07-11 Thread Holger Parplies
Hi,

Christian Völker wrote on 2011-07-12 00:17:57 +0200 [Re: [BackupPC-users] My 
Solution for "Off-Site"]:
> On 11/07/2011 21:58, Les Mikesell wrote:
> > The way my 'take a disk off RAID1' works is that there are 3 spare 
> > disks, with at least one always offsite in the rotation [...]
>
> I'm aware of the rotation there- it's just the same and only a question
> on levels you do it. You have three disks and swap them at some time. I
> take snapshots instead. In both cases it can happen a filesystem error
> gets copied over, too.

so, you're saying that you don't trust your file system, but you trust LVM to
keep 4 snapshots accurate for up to four weeks? I think I understand Les'
point (if he's making it) that a hardware-based "don't do anything" approach
is more reliable than a software-based "accumulate the information needed to
undo all my changes". But I also understand your point of "as long as it
works, it gives me three previous states to go back to".

> I think I might move it to the garage, though :)

I hope your data is well enough protected against theft in your garage.

> > [...] to understand how you can drbd to the live partition while keeping 
> > snapshots of old copies.  I wouldn't have expected that to work.  Are 
> > they really layered correctly so the lvm copy-on-write business works?

Why shouldn't it work? An LVM LV is just a block device. Why should the
snapshotting be in any way dependent on the type of data you have on top?

> Yes, this works absolutely fine.  [...] Taking a snapshot of the LVM volume
> doesn't affect the drbd device at all.

I'm just wondering whether you're unmounting the pool FS before the snapshot,
or if you're relying on it to be in a consistent state by itself. How much
testing have you done?

> The only thing I have to evaluate is to have the proper size of the
> snapshot.

Which, in itself, doesn't sound practical. Effectively, you are estimating
how much new data your backups for a week (or four weeks?) will contain.

I just hope you don't decide to implement a BackupPC fork with deduplication
implemented through LVM snapshots ;-).

Regards,
Holger

--
All of the data generated in your IT infrastructure is seriously valuable.
Why? It contains a definitive record of application performance, security 
threats, fraudulent activity, and more. Splunk takes this data and makes 
sense of it. IT sense. And common sense.
http://p.sf.net/sfu/splunk-d2d-c2
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Different UID numbers for backuppc on 2 computers

2011-07-11 Thread Timothy Murphy
Les Mikesell wrote:

>> Now when I NFS mount /archive on machine B on /archive on machine A
>> I am told that /archive belongs to avahi-autoipd ,
>> which has UID 102 on machine A.
> 
> It shouldn't matter to the machine exporting the nfs directory whether
> there is a local user with the same uid or not.

Thanks for the responses.
I see now that there was a mistake in /etc/exports on machine B.
The archive is (hopefully) presently under way.
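
For anyone hitting the same thing: the /etc/exports entry on machine B
needs to grant the archiving host write access (and, if you want to fix
ownership from the client as Les suggested, root access). A hypothetical
example, with the path, client network, and options all assumptions:

```
/archive  192.168.1.0/24(rw,sync,no_root_squash)
```

Remember to run 'exportfs -ra' after editing the file so the change takes
effect.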



-- 
Timothy Murphy  
e-mail: gayleard /at/ eircom.net
tel: +353-86-2336090, +353-1-2842366
s-mail: School of Mathematics, Trinity College, Dublin 2, Ireland


--
All of the data generated in your IT infrastructure is seriously valuable.
Why? It contains a definitive record of application performance, security 
threats, fraudulent activity, and more. Splunk takes this data and makes 
sense of it. IT sense. And common sense.
http://p.sf.net/sfu/splunk-d2d-c2
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] My Solution for "Off-Site"

2011-07-11 Thread Christian Völker
On 12/07/2011 01:05, Holger Parplies wrote:
>
> so, you're saying that you don't trust your file system, but you trust LVM to
> keep 4 snapshots accurate for up to four weeks? I think I understand Les'
> point (if he's making it) that a hardware-based "don't do anything" approach
> is more reliable than a software-based "accumulate the information needed to
> undo all my changes". But I also understand your point of "as long as it
> works, it gives me three previous states to go back to".
Take it as you like. I never said I don't trust my filesystem. At least
you have to trust *something* or you'll end up in endless layers of
security.

We both have the possibility to roll back to a point some weeks ago. If
LVM doesn't work as expected *or* Les' disks get broken during the swap,
it's just the same.
> I'm just wondering whether you're unmounting the pool FS before the snapshot,
> or if you're relying on it to be in a consistent state by itself. How much
> testing have you done?
You can perform tests multiple times; every time they are fine, but in a
real emergency something else you haven't thought of fails. Meaning:
there's little point in testing whether a not-properly-closed filesystem
is able to recover, as you can't foresee every case.
I'm using ext3 with data=journal, so it should work fine even without
proper unmounting.
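
(For reference, data=journal is just a mount option; a hypothetical
/etc/fstab line for such a pool filesystem might look like:)

```
/dev/vg0/backuppc  /var/lib/backuppc  ext3  defaults,data=journal  0  2
```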


>> The only thing I have to evaluate is to have the proper size of the
>> snapshot.
> Which, in itself, doesn't sound practical. Effectively, you are estimating
> how much new data your backups for a week (or four weeks?) will contain.

I have to estimate how much data changes on the volume in a week's time,
yes. Then I take a snapshot. And another the next week. So it's a one-week
estimate. And why should this be an issue? The secondary is a 2TB disk
while the original is around 1TB, so the amount of data changing within
a four-week time frame can be 100%. This is fine, although from
monitoring, the change rate per week is far below 100GB.

Greetings

Christian


--
All of the data generated in your IT infrastructure is seriously valuable.
Why? It contains a definitive record of application performance, security 
threats, fraudulent activity, and more. Splunk takes this data and makes 
sense of it. IT sense. And common sense.
http://p.sf.net/sfu/splunk-d2d-c2
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/