Re: [BackupPC-users] My Solution for Off-Site

2011-07-25 Thread gimili
On 7/12/11 2:31 AM, Andrew Ford wrote:
 My setup is somewhat simpler.  My BackupPC datastore is stored on a
 700GB LVM logical volume on a 1TB disk, and I have 3 external 1TB eSATA
 disks.  Each week I make a snapshot of the BackupPC logical volume and
 dd the snapshot volume to one of the external disks (takes about 2
 hours) and then cmp the snapshot volume and the raw partition on the
 external disk.  The newest backup disk lives at home, I take the next
 oldest in to work and keep it in my desk, and bring the oldest disk home.

 Andrew


Simple is good.  I also use 3 external eSATA drives.  I haven't tried 
this yet but I was considering using a cron job to run the command line 
command that generates a large tar file of the backup.  I had a complete 
hard disk failure 7 months ago and this was how I moved the data from 
the backuppc machine to the new replacement drive on my server.  The 
only thing I would have to do is remember to cycle the disks.
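Roughly what I have in mind is a weekly cron entry along these lines (untested;
the schedule, paths and data directory are just placeholders for my setup):

    # /etc/cron.d/backuppc-offsite -- sketch, not yet in use
    # Every Sunday at 01:00, tar the whole BackupPC data directory onto
    # whichever external disk happens to be mounted at /mnt/offsite.
    # (% must be escaped in crontab command lines.)
    0 1 * * 0  root  tar -cf /mnt/offsite/backuppc-$(date +\%Y\%m\%d).tar /var/lib/backuppc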

Any comments on pros/cons to this method?

Thank you,

-- 
gimili




Re: [BackupPC-users] My Solution for Off-Site

2011-07-25 Thread Carl Wilhelm Soderstrom
On 07/25 07:46 , gimili wrote:
 Simple is good.  I also use 3 external eSATA drives.  I haven't tried 
 this yet but I was considering using a cron job to run the command line 
 command that generates a large tar file of the backup.  I had a complete 
 hard disk failure 7 months ago and this was how I moved the data from 
 the backuppc machine to the new replacement drive on my server.  The 
 only thing I would have to do is remember to cycle the disks.
 
 Any comments on pros/cons to this method?

If you're using a script that generates archive files, one big advantage is
that they are independent of backuppc's storage layout. Even without the
backup server you can just unpack them and have a working system (provided
your backups are set up that way).

I've got at least one site that is set up that way. The users onsite just
remember to swap a USB disk once a week; the script mounts it, deletes old
archives off it, and makes new archives to it, then unmounts the disk and
mails the script output to interested parties.

Let me know if you'd like the script. It's crude, but does work.
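(A generic sketch of that kind of swap-disk job, not Carl's actual script;
the disk label, mount point, data directory and retention count below are
assumptions:)

    #!/bin/sh
    # Archive BackupPC's data directory to whichever USB disk was swapped in.
    set -e
    MNT=/mnt/offsite
    TOPDIR=/var/lib/backuppc        # adjust to the local BackupPC data directory
    KEEP=3                          # archives to keep on each disk

    mount LABEL=offsite "$MNT"

    # delete the oldest archives, keeping the newest $KEEP
    ls -1t "$MNT"/backuppc-*.tar 2>/dev/null | tail -n +$((KEEP + 1)) | xargs -r rm -f

    # write the new archive, then release the disk
    tar -cf "$MNT/backuppc-$(date +%Y%m%d).tar" "$TOPDIR"
    umount "$MNT"
    # run it from cron with MAILTO set so the output goes to interested parties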

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com



Re: [BackupPC-users] My Solution for Off-Site

2011-07-25 Thread Andrew Ford
gimili wrote:
 On 7/12/11 2:31 AM, Andrew Ford wrote:
 My setup is somewhat simpler.  My BackupPC datastore is stored on a
 700GB LVM logical volume on a 1TB disk, and I have 3 external 1TB eSATA
 disks.  Each week I make a snapshot of the BackupPC logical volume and
 dd the snapshot volume to one of the external disks (takes about 2
 hours) and then cmp the snapshot volume and the raw partition on the
 external disk.  The newest backup disk lives at home, I take the next
 oldest in to work and keep it in my desk, and bring the oldest disk 
 home.

 Andrew


 Simple is good.  I also use 3 external eSATA drives.  I haven't tried 
 this yet but I was considering using a cron job to run the command 
 line command that generates a large tar file of the backup.  I had a 
 complete hard disk failure 7 months ago and this was how I moved the 
 data from the backuppc machine to the new replacement drive on my 
 server.  The only thing I would have to do is remember to cycle the 
 disks.

 Any comments on pros/cons to this method?

 Thank you,

Generating a tar file of the backup directory is going to walk the 
directory tree and read each of the files separately.  Doing a dd of 
the snapshot volume is reading the disk sequentially (although there may 
be a certain amount of out-of-order data if there has been any activity 
on the underlying volume).  Originally I was doing a recursive copy of 
the filesystem of the snapshot volume to the external disk, but that 
took much much longer than the block copy (I think it took something 
like 30 hours compared to 2).  I suspect that making a tar file is going 
to take something in between the two extremes.
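(If anyone wants to measure that middle ground, the experiment is roughly:
mount the LVM snapshot read-only and tar from it. VG/LV names, sizes and
mount points below are placeholders:)

    lvcreate --snapshot --size 20G --name bpc-snap /dev/vg0/backuppc
    mkdir -p /mnt/bpc-snap
    mount -o ro /dev/vg0/bpc-snap /mnt/bpc-snap
    time tar -cf /mnt/external/backuppc.tar -C /mnt/bpc-snap .
    umount /mnt/bpc-snap
    lvremove -f /dev/vg0/bpc-snap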

Andrew

-- 
Andrew Ford
South Wing Compton House
Compton Green, Redmarley
Gloucester, GL19 3JB, UK
Telephone: +44 1531 829900
Mobile:+44 7785 258278




Re: [BackupPC-users] My Solution for Off-Site

2011-07-12 Thread Christian Völker
On 12/07/2011 01:05, Holger Parplies wrote:

 so, you're saying that you don't trust your file system, but you trust LVM to
 keep 4 snapshots accurate for up to four weeks? I think I understand Les'
 point (if he's making it) that a hardware-based "don't do anything" approach
 is more reliable than a software-based "accumulate the information needed to
 undo all my changes". But I also understand your point of "as long as it
 works, it gives me three previous states to go back to".
Take it as you like. I never said I don't trust my filesystem. At least
you have to trust *something* or you'll end up in endless layers of
security.

We both have the possibility to roll back to a point some weeks ago. If
LVM doesn't work as expected, *or* Les's disks break during the swap,
it's just the same.
 I'm just wondering whether you're unmounting the pool FS before the snapshot,
 or if you're relying on it to be in a consistent state by itself. How much
 testing have you done?
You can perform tests multiple times; every time they are fine, but in a
real emergency something else fails that you hadn't thought of previously.
Meaning: there's no point in testing whether a not-properly-closed
filesystem is able to recover, as you can't foresee every case anyway.
I'm using ext3 with data=journal, so it should work fine even without
proper unmounting.


 The only thing I have to evaluate is to have the proper size of the
 snapshot.
 Which, in itself, doesn't sound practical. Effectively, you are estimating
 how much new data your backups for a week (or four weeks?) will contain.

I have to estimate how much data changes on the volume in a week's time,
yes. Then I take a snapshot, and another the next week, so it's a one-week
estimate. And why should this be an issue? The secondary is a 2TB disk
while the original is around 1TB, so the amount of data changing within
a four-week time frame can be 100%. This is fine, although from
monitoring, the change rate per week is far below 100GB.
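(For illustration, the weekly snapshot plus a fill-level check might look
roughly like this; volume group and LV names are placeholders, not the
actual setup:)

    # reserve 100GB of copy-on-write space, sized from the estimated weekly change
    lvcreate --snapshot --size 100G \
        --name backuppc-$(date +%Y%m%d) /dev/vg0/backuppc

    # 'lvs' shows how full each snapshot's COW space is (the Data% column);
    # a snapshot that overflows is invalidated by LVM
    lvs vg0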

Greetings

Christian




Re: [BackupPC-users] My Solution for Off-Site

2011-07-12 Thread Andrew Ford
My setup is somewhat simpler.  My BackupPC datastore is stored on a 
700GB LVM logical volume on a 1TB disk, and I have 3 external 1TB eSATA 
disks.  Each week I make a snapshot of the BackupPC logical volume and 
dd the snapshot volume to one of the external disks (takes about 2 
hours) and then cmp the snapshot volume and the raw partition on the 
external disk.  The newest backup disk lives at home, I take the next 
oldest in to work and keep it in my desk, and bring the oldest disk home.
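(In command form, that weekly cycle is roughly the following; the volume
group, snapshot size and eSATA device name are assumptions, not Andrew's
actual values:)

    lvcreate --snapshot --size 50G --name bpc-weekly /dev/vg0/backuppc
    dd if=/dev/vg0/bpc-weekly of=/dev/sdc1 bs=64M conv=fsync    # ~2 hours
    cmp /dev/vg0/bpc-weekly /dev/sdc1    # verify (assumes partition and LV are the same size)
    lvremove -f /dev/vg0/bpc-weekly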

Andrew





Re: [BackupPC-users] My Solution for Off-Site

2011-07-11 Thread Carl Wilhelm Soderstrom
On 07/10 08:22 , Christian Völker wrote:
 So what I did was set up a small physical box (an old desktop should
 work), no RAID involved. I installed the distributed replicated block
 device (drbd; use Google) there, and the same on the backuppc machine. So I
 have a physically separated RAID1 available, just over the network. Both
 drbd devices use LVM volumes as backing devices, so I can enlarge/shrink at
 will. The external server additionally uses the snaprotate.pl script to
 create 4 snapshots of the drbd device at a weekly rate.

So you're running LVM and DRBD on the ESX guest machine?
Isn't that a notable performance hit? Especially keeping 4 snapshots
simultaneously?

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com



Re: [BackupPC-users] My Solution for Off-Site

2011-07-11 Thread Christian Völker
On 11/07/2011 16:15, Carl Wilhelm Soderstrom wrote:

 So you're running LVM and DRBD on the ESX guest machine?
 Isn't that a notable performance hit? Especially keeping 4 snapshots
 simultaneously?

I'm running LVM in the virtual machine with drbd on top, yes, but without
snapshots. The snapshots are taken only on the physical box; there's no
sense in taking snapshots on both. The virtual machine is stored on a
RAID10 on the ESX host.

And performance? Well, I don't know. It backs up my roughly 20 machines
(mostly Linux) without any issues. I don't mind the backup or the re-org of
the files taking 30% longer... it's still all done within 24 hours.

Greetings

Christian






Re: [BackupPC-users] My Solution for Off-Site

2011-07-11 Thread Carl Wilhelm Soderstrom
On 07/11 07:54 , Christian Völker wrote:
 On 11/07/2011 16:15, Carl Wilhelm Soderstrom wrote:
 
  So you're running LVM and DRBD on the ESX guest machine?
  Isn't that a notable performance hit? Especially keeping 4 snapshots
  simultaneously?
 
 I'm running LVM in the virtual machine with drbd on top, yes, but without
 snapshots. The snapshots are taken only on the physical box; there's no
 sense in taking snapshots on both. The virtual machine is stored on a
 RAID10 on the ESX host.
 
 And performance? Well, I don't know. It backs up my roughly 20 machines
 (mostly Linux) without any issues. I don't mind the backup or the re-org of
 the files taking 30% longer... it's still all done within 24 hours.

Ok. Thanks for letting me know. Glad it works for you in your environment.

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com



Re: [BackupPC-users] My Solution for Off-Site

2011-07-11 Thread Eduardo Díaz Rodríguez
I have a similar situation but approached it a different way.

One cluster, two machines, one service (Samba); the RAID1 is software,
provided by drbd.

Each cluster node has one hard disk for backups (sda for the data (drbd)
and the OS, and sdb for backup-pc and a dump of the OS).

The backups are normally local; now I use rsyncd, and every server makes a
copy of the data using the IP of the cluster: rsync to the cluster IP and
get the data.

Two identical copies... :-)
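(A guess at the kind of pull being described: each node rsyncs from the
rsync daemon answering on the cluster IP; module name, address and
destination path are assumptions:)

    # pull a copy of the shared data from the rsyncd module at the cluster IP
    rsync -a --delete rsync://192.168.0.10/data/ /backup/data/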


-- 
===

If Jesus saves, Norton does the backups.
-- Www.frases.com.

===



Re: [BackupPC-users] My Solution for Off-Site

2011-07-11 Thread Les Mikesell
On 7/11/2011 1:43 PM, Eduardo Díaz Rodríguez wrote:
 I have a similar situation but approached it a different way.

 One cluster, two machines, one service (Samba); the RAID1 is software,
 provided by drbd.

 Each cluster node has one hard disk for backups (sda for the data (drbd)
 and the OS, and sdb for backup-pc and a dump of the OS).

 The backups are normally local; now I use rsyncd, and every server makes a
 copy of the data using the IP of the cluster: rsync to the cluster IP and
 get the data.

 Two identical copies... :-)

Without the lvm snapshots, isn't there a danger of something corrupting 
the master server's filesystem and having it propagate to the drbd copy 
instantly?

-- 
   Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] My Solution for Off-Site

2011-07-11 Thread Christian Völker
On 11/07/2011 21:03, Les Mikesell wrote:

 Without the lvm snapshots, isn't there a danger of something corrupting 
 the master server's filesystem and having it propagate to the drbd copy 
 instantly?

You're absolutely right, and this is the reason why I have the LVM
snapshots. I can go back 5 weeks with the snapshots, which is enough to
prevent any serious issues. When the file system becomes unreadable I
usually notice it immediately and roll back to the previous snapshot.

BTW: The same would happen with the often-proposed "take a disk off your
RAID1" approach. In some way you have to trust the filesystem. Of course,
rsync'ing it from host A with ext3 to host B with XFS would be a better
solution security-wise, but as you know rsync is not the best solution here.

I trust my file system for at least 5 weeks ;)

Greetings

Christian



Re: [BackupPC-users] My Solution for Off-Site

2011-07-11 Thread Les Mikesell
On 7/11/2011 2:13 PM, Christian Völker wrote:

 Without the lvm snapshots, isn't there a danger of something corrupting
 the master server's filesystem and having it propagate to the drbd copy
 instantly?

 You're absolutely right. And this is the reason why I have the LVM
 snapshots. I can go back 5 weeks with the snapshots.  That's enough to
 prevent any serious issues. When the file system gets unreadable I
 usually notice it immediately- and roll back to previous snapshot.

 BTW: The same would happen with the often-proposed "take a disk off your
 RAID1" approach.

The way my 'take a disk off RAID1' scheme works is that there are 3 spare
disks, with at least one always offsite in the rotation, and another one
that wouldn't be brought back in if there were any reason to suspect that
the filesystem was corrupt when the most recent copy was made.
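(The usual mdadm steps for that kind of rotation, with placeholder device
names; not necessarily Les's exact procedure:)

    mdadm /dev/md0 --fail   /dev/sdc1    # detach the disk that is going offsite
    mdadm /dev/md0 --remove /dev/sdc1
    mdadm /dev/md0 --add    /dev/sdd1    # attach the disk that just came back
    cat /proc/mdstat                     # watch the resync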

 In some way you have to trust the filesystem.

You have to trust that it works when it appears to be working.  You 
don't have to trust it to keep working through your next copy.

 Of course
 rsync'ing it from host A with ext3 to host B with XFS would be a better
 solution security wise. But as you know rsync is not the best solution here.

Even rsync'ing would leave you in a strange state if the source dies in 
mid-copy to your only target.

 I trust my file system at least for 5 weeks ;)

I don't trust anything in the same building or anything that can be 
corrupted by a live copy.  And I don't know enough about lvm to 
understand how you can drbd to the live partition while keeping 
snapshots of old copies.  I wouldn't have expected that to work.  Are 
they really layered correctly so the lvm copy-on-write business works?

-- 
Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] My Solution for Off-Site

2011-07-11 Thread Holger Parplies
Hi,

Christian Völker wrote on 2011-07-12 00:17:57 +0200 [Re: [BackupPC-users] My 
Solution for Off-Site]:
 On 11/07/2011 21:58, Les Mikesell wrote:
  The way my 'take a disk off RAID1' works is that there are 3 spare 
  disks, with at least one always offsite in the rotation [...]

 I'm aware of the rotation there; it's just the same, only a question of
 the level at which you do it. You have three disks and swap them at some
 time; I take snapshots instead. In both cases a filesystem error can get
 copied over, too.

so, you're saying that you don't trust your file system, but you trust LVM to
keep 4 snapshots accurate for up to four weeks? I think I understand Les'
point (if he's making it) that a hardware-based "don't do anything" approach
is more reliable than a software-based "accumulate the information needed to
undo all my changes". But I also understand your point of "as long as it
works, it gives me three previous states to go back to".

 I think I might move it to the garage, though :)

I hope your data is well enough protected against theft in your garage.

  [...] to understand how you can drbd to the live partition while keeping 
  snapshots of old copies.  I wouldn't have expected that to work.  Are 
  they really layered correctly so the lvm copy-on-write business works?

Why shouldn't it work? An LVM LV is just a block device. Why should the
snapshotting be in any way dependent on the type of data you have on top?

 Yes, this works absolutely fine.  [...] Taking a snapshot of the LVM volume
 doesn't affect the drbd device at all.

I'm just wondering whether you're unmounting the pool FS before the snapshot,
or if you're relying on it to be in a consistent state by itself. How much
testing have you done?

 The only thing I have to evaluate is to have the proper size of the
 snapshot.

Which, in itself, doesn't sound practical. Effectively, you are estimating
how much new data your backups for a week (or four weeks?) will contain.

I just hope you don't decide to implement a BackupPC fork with deduplication
implemented through LVM snapshots ;-).

Regards,
Holger



[BackupPC-users] My Solution for Off-Site

2011-07-10 Thread Christian Völker
Hi,
I just want to share my solution for keeping an additional backup of the
original BackupPC store.

As we all know, it's not really a good solution to rsync the BackupPC
datastore to somewhere else, due to the hardlinks. Doing manual image
copies (i.e. by swapping the drives of a RAID-1 array) has the big
disadvantage that it's a manual step.

So I decided to combine a couple of other techniques here:
First, my BackupPC is running as a virtual machine on a VMware ESX host,
sharing datastore and resources with the machines to back up. The obvious
disadvantage is the case when the ESX host fails: how should I restore this
guest and the BackupPC machine? Well, ESX is fairly stable, but you never know.
My storage holds in total 952GB of backup data, so it's really not a good
idea to do an rsync here. Swapping drives manually is no good either, as
the ESX host would complain.

So what I did was set up a small physical box (an old desktop should
work), no RAID involved. I installed the distributed replicated block
device (drbd; use Google) there, and the same on the backuppc machine. So I
have a physically separated RAID1 available, just over the network. Both
drbd devices use LVM volumes as backing devices, so I can enlarge/shrink at
will. The external server additionally uses the snaprotate.pl script to
create 4 snapshots of the drbd device at a weekly rate.
The drbd box has BackupPC installed, too, so I can easily tell it to
take over and restore.
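(For the take-over case, the steps on the external box would be roughly the
following; the resource name, mount point and init script are placeholders:)

    drbdadm primary backuppc             # promote the local replica
    mount /dev/drbd0 /var/lib/backuppc   # the BackupPC data directory
    /etc/init.d/backuppc start           # this box has BackupPC installed too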

So with my setup I'm nearly prepared for everything at relatively low cost:
- backuppc itself fails
+ the drbd one will take over after some minor (manual) steps
- backuppc wipes out its storage (script failure or file system issue)
+ I will roll back on the drbd box to one of the previous LVM snapshots
  (up to four weeks back)
- ESX host fails without removing backuppc
+ set drbd as primary and restore ESX (or just reinstall, it's faster)
- ESX host fails and wipes out the backuppc VM
+ set drbd as primary and restore everything from there

So in summary, I can easily keep my backuppc storage remotely in sync
with drbd and keep snapshots to roll back weeks. The initial sync and
data migration to the drbd device took 24 hours; you could reduce
backuppc downtime by not doing the dd command in parallel with the
initial sync.

The only disadvantage is that the physical drbd box is currently in the
same building (my home) as the original ESX host. But this is not likely
to change, due to my low external uplink bandwidth here; someone could use
the drbd proxy to sync over small lines, but that proxy is not available as
free software. For me it's fine: if my house burns down I have more serious
issues than my BackupPC storage ;-)

Greetings

Christian



