Re: [BackupPC-users] rsyncd on vista 64 bit

2009-12-09 Thread Erik Hjertén
Kameleon skrev:
> I am trying to setup the standalone rsyncd from the backuppc downloads 
> page on a 64 bit vista machine. I have done it already on about 5 32 
> bit machines. Only this one fails to start the service. I see no error 
> other than it tries to run and then nothing. Has anyone else run into 
> this issue and found a workaround? I don't want to use smb if I can 
> help it. Thanks in advance.
I'm running the Cygwin rsyncd bundled with DeltaCopy on Vista 64. I'm not 
sure whether the DeltaCopy team altered the Cygwin DLLs in some way, but it 
works very well. I'm doing daily backups via rsyncd to a Linux-based server 
running BackupPC. DeltaCopy can be found here: 
http://www.aboutmyip.com/AboutMyXApp/DeltaCopy.jsp

Kind regards
/Erik



--
Return on Information:
Google Enterprise Search pays you back
Get the facts.
http://p.sf.net/sfu/google-dev2dev
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] making a mirror image of the backup pc disk

2009-12-09 Thread Pat Rice
Hi all,
Well, at the moment I am recovering from a flooding situation.
My office was flooded with 2.5 ft of water. Luckily the backup server
(BackupPC) was above the water line, and so was my backup server's hard
drive. Unfortunately, my machines that were on the ground were not so
lucky. I have spent one or two days pouring water out of machines, not a
pleasant sight.

I have a new hard drive, as I am worried about the hard drive that was
already in the system having been exposed to the damp/water etc.

So I want to try to get the data off it and reconnect it to the server.
The ideal situation would be an rsync copy of the server at a different
location (has anyone done this?).

I had it set up as an LVM volume (/lib/backuppc/).

What I would like to know (has anyone had experience of this?) is how to
make a mirror of the backup disk:
Should I do a dd?
Or would a plain copy be sufficient?
Or will I have to worry about hard links that need to be kept?


Or should I just bite the bullet, put in an rsync server, and take my
chances with the disk?
Any advice would be gratefully received.

Thanks in advance
Pat



Re: [BackupPC-users] RsyncP problem

2009-12-09 Thread Jeffrey J. Kosowsky
Les Mikesell wrote at about 14:11:12 -0600 on Monday, December 7, 2009:
 > It applies to full rsync or rsyncd backups.  An interrupted full should 
 > be marked as a 'partial' in your backup summary - and the subsequent 
 > full retry should not transfer the completed files again although it 
 > will take the time to do a block checksum compare over them.  I don't 
 > think it applies to incomplete files, so if you have one huge file that 
 > didn't finish I think it would retry from the start.   This and 
 > Conf{IncrLevels} are fairly recent additions - be sure you have a 
 > current backuppc version and the code and documentation match.   Even 
 > the current version won't find new or moved content if it exists in the 
 > pool, though.

Is there any reason the rsync option --partial couldn't be implemented
in perl-File-RsyncP (if it isn't already there)? This would presumably
allow partial backups of single files to be resumed. I'm not sure how hard
it would be, but intuitively I wouldn't think it would be too hard.

This could be important when backing up large files (e.g., video,
databases, isos) and in particular over a slow link.









Re: [BackupPC-users] making a mirror image of the backup pc disk

2009-12-09 Thread Alan McKay
See the recent thread about FC7 to FC10 upgrade, since this is what
was being discussed there.




-- 
“Don't eat anything you've ever seen advertised on TV”
 - Michael Pollan, author of "In Defense of Food"



[BackupPC-users] Excluding specific files within a directory tree

2009-12-09 Thread ckandreou

I have the following files
/cmroot/ems_src/view/2010_emsmadd.vws/.pid
/cmroot/ems_src/view/2010_deva.vws/.pid
/cmroot/ems_src/view/emsadmcm_01.03.006.vws/.pid
/ccdev10/cmroot/ems_src/vob/mems.vbs/.pid

I would like BackupPC to exclude the .pid files.

I used the following exclude line in the .pl config file:
  '--exclude=/cmroot/ems_src/view/*/.pid',

I would appreciate it if someone could confirm that this is correct. If not,
any advice on how to achieve it would be great.

Thanks in advance.

Chris.

+--
|This was sent by christakisandr...@yahoo.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--





[BackupPC-users] a slightly different question about rsync for offsite backups

2009-12-09 Thread Omid
so this is a slightly different question, about cron'ed rsync backups to
an external usb drive.  i know this isn't strictly a backuppc question,
but...

the idea is to schedule an rsync command to an external drive say every
wednesday morning at 3 am, instruct the office to plug the drive in on
tuesday, and to replace it on thursday with next week's drive.

i have the rsync command down pat.  i'm using:

rsync -aHPp /data/ /mnt/usb/data/

i've realized that the trailing slash is important, if only to be
consistent.

i've gotten the cronjob down pat, including the mount, stop and umount
commands.  what i'm having problems with is this.

if the usb drive does not mount for whatever reason (either because it
hasn't been plugged in, or for another reason), the copy is going to go to
the folder that's there, which is going to fill up the native drive very
quickly.

how can i avoid this?

i've tried the --no-dir option in rsync, hoping that it would prevent rsync
from running if the destination folder doesn't exist, but it doesn't seem
to work.

the only other option i seem to have is to create a script that confirms
that the mount has occurred before executing the rsync script.  got any
ideas?

thanks!!


Re: [BackupPC-users] a slightly different question about rsync for offsite backups

2009-12-09 Thread Tino Schwarze
On Wed, Dec 09, 2009 at 10:57:13AM -0800, Omid wrote:

[...]

> the idea is to schedule an rsync command to an external drive say every
> wednesday morning at 3 am, instruct the office to plug the drive in on
> tuesday, and to replace it on thursday with next week's drive.
> 
> i have the rsync command down pat.  i'm using:
> 
> rsync -aHPp /data/ /mnt/usb/data/
> 
> i've realized that the trailing backslash is important .  to be
> consistent anyways.
> 
> i've gotten the cronjob down pat, including the mount, stop and umount
> commands.  what i'm having problems with is this.
> 
> if the usb drive does not mount for whatever reason (either because it
> hasn't been plugged in, or for another reason), the copy is going to go to
> the folder that's there, which is going to fill up the native drive very
> quickly.
> 
> how can i avoid this?
 
> i've tried the --no-dir command in rsync, hoping that it would prevent rsync
> from happening if the destination folder doesn't exist.  but it doesn't seem
> to work.
> 
> the only other option i seem to have is to create a script that confirms
> that the mount has occurred before executing the rsync script.  got any
> idea's?

Just create a file called "THIS_IS_THE_USB_DRIVE" on the drive itself,
then let your cronjob check for it like this:

[ -f /mnt/usb/THIS_IS_THE_USB_DRIVE ] && rsync -aHPp /data/ /mnt/usb/data/

Of course, a script would be more suitable; it might mail somebody, or run
"while [ ! -f /mnt/usb/THIS_IS_THE_USB_DRIVE ] ; do sleep 1m ; done"
to wait for the drive to appear.

HTH,

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.tisc.de



Re: [BackupPC-users] making a mirror image of the backup pc disk

2009-12-09 Thread Chris Robertson
Pat Rice wrote:
> HI all
> Well at the moment I an recovering from a flooding situation.
> I had my office flooded to 2.5ft of water. Luckily the Backup server
> (backup pc) was above the water line and also my hard drive for my
> backup server. Unfortunately my machines that were on the ground, were
> not so lucky. I have spent one or two days pouring water out of
> machines, not a pleasant sight
>
> I have a new hard drive as I am worried about the hard drive that was
> already in the system, being exposed to the damp/water etc.
>
> So I want to try and get the data off it, and reconnect it the server.
> The Ideal situation would be a rsync copy of the server at a different
> location (anyone have this done ?).
>
> I had it set up as a LVM so /lib/backuppc/
>
> What I would like to know, or if any on had any experience of:
> Making a mirror or the backup disk:
> Should I do a dd?
> or would a copy be sufficient ?
> or will I have to worry about hard links that need to be kept ?
>
>
> Or should I just bite the bullet and put in a rsync server and take my
> chances with the disk?
> Any advice would be greatly received.
>
> Thanks in advance
> Pat
>   

See the FAQ section "Copying the pool" under the header "Other 
installation topics" 
http://backuppc.sourceforge.net/faq/BackupPC.html#other_installation_topics

Chris




Re: [BackupPC-users] Excluding specific files within a directory tree

2009-12-09 Thread Chris Robertson
ckandreou wrote:
> I have the following files
> /cmroot/ems_src/view/2010_emsmadd.vws/.pid
> /cmroot/ems_src/view/2010_deva.vws/.pid
> /cmroot/ems_src/view/emsadmcm_01.03.006.vws/.pid
> /ccdev10/cmroot/ems_src/vob/mems.vbs/.pid
>
> I would like backuppc to exclude .pid 
>
> I used the following exclude line with  .pl 
>   '--exclude=/cmroot/ems_src/view/*/.pid',
>   

Better would be using the built in configuration parameter 
$Conf{BackupFilesExclude} 
(http://backuppc.sourceforge.net/faq/BackupPC.html#item__conf_backupfilesexclude_)

> I would appreciate if someone could confirm that is correct. If not, any 
> advice on how to achieve it, would be great.
>   

$Conf{BackupFilesExclude} = '*/.pid';

Should match a file named ".pid" in any directory (at least when using 
rsync, rsyncd or tar; I'm a bit fuzzy on the SMB matching).

> Thanks in advance.
>
> Chris.

Chris




Re: [BackupPC-users] a slightly different question about rsync for offsite backups

2009-12-09 Thread Holger Parplies
Hi,

Tino Schwarze wrote on 2009-12-09 20:50:35 +0100 [Re: [BackupPC-users] a 
slightly different question about rsync for offsite backups]:
> On Wed, Dec 09, 2009 at 10:57:13AM -0800, Omid wrote:
> [...]
> > if the usb drive does not mount for whatever reason (either because it
> > hasn't been plugged in, or for another reason), the copy is going to go to
> > the folder that's there, which is going to fill up the native drive very
> > quickly.
> > 
> > how can i avoid this?
> [...]
> 
> Just create a file called "THIS_IS_THE_USB_DRIVE" on the drive itself,

... or a file "THIS_IS_THE_HOST_DRIVE" in the directory you are mounting to
(and invert the testing logic). Or, of course, read mountpoint(1) and do
something like

mountpoint -q /mnt/usb && rsync -aHPp /data/ /mnt/usb/data/

It all depends on what you want to make easy and what you want to guard
against.

All of that said, remember that rsyncing a BackupPC pool doesn't scale well
and may fail at some point in the future. Also, syncing a live pool will
probably not lead to a consistent copy. Depending on what you might be using
the copy for, that may or may not be a problem: restoring from backups
completed before starting the copy will probably work (though parallel chain
renumbering might mess things up; I don't really know), but I wouldn't
recommend using it, or a copy of it, to continue backing up to.

Regards,
Holger



Re: [BackupPC-users] convert from rsync to rsyncd

2009-12-09 Thread Holger Parplies
Hi,

Jeffrey J. Kosowsky wrote on 2009-12-09 02:33:35 -0500 [Re: [BackupPC-users] 
convert from rsync to rsyncd]:
> [...]
> That being said, I don't think "/" will pose a problem since BackupPC
> saves the rsync "/" share name as "f%2f" (mangled form) which is
> equivalent to an unmangled "%2f" share name which probably is allowed
> in rsyncd. So just call the equivalent rsyncd share name "%2f". I
> haven't tested this, so I may be missing something, but try it...

sure you're missing something. A file or directory name "%2f" would luckily
be mangled to "f%252f" - otherwise there could be file name collisions. See
sub fileNameEltMangle in BackupPC::Lib. So, no, "f%2f" is *not* equivalent to
an unmangled "%2f".
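To illustrate, the mangling rule can be sketched as a tiny shell function. This is a from-scratch re-implementation for demonstration only, not BackupPC's actual Perl code:

```shell
# Sketch of fileNameEltMangle's escaping: prefix 'f', and escape '%'
# before '/', so a literal "%2f" file name can never collide with an
# escaped "/".
mangle() {
    printf 'f%s\n' "$(printf '%s' "$1" | sed -e 's/%/%25/g' -e 's,/,%2f,g')"
}
mangle '/'      # prints f%2f
mangle '%2f'    # prints f%252f
```

Escaping '%' first is what guarantees the two results above stay distinct.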

Regards,
Holger



Re: [BackupPC-users] a slightly different question about rsync for offsite backups

2009-12-09 Thread Omid
indeed, these both seem like really good ways to approach this!

i would definitely stop the backuppc service beforehand and start it again
afterwards.  i know this doesn't stop backups in progress, but it's not a
lot of computers, and by 3 am all that needs to be done is done.  and the
rsync to the external usb only takes a couple of hours.

perhaps i should be adding the switch that deletes files that are no longer
present as well?  hmmm...

i do understand that this may stop working at some point, but right now the
pool is only a few hundred gigs, running on a machine with 2 gigs of ram,
and the tests i've done haven't had a problem.  and i don't see the pool
growing much beyond that.

we'll see how it goes!

thanks.

On Wed, Dec 9, 2009 at 2:39 PM, Holger Parplies  wrote:

> Hi,
>
> Tino Schwarze wrote on 2009-12-09 20:50:35 +0100 [Re: [BackupPC-users] a
> slightly different question about rsync for offsite backups]:
> > On Wed, Dec 09, 2009 at 10:57:13AM -0800, Omid wrote:
> > [...]
> > > if the usb drive does not mount for whatever reason (either because it
> > > hasn't been plugged in, or for another reason), the copy is going to go
> to
> > > the folder that's there, which is going to fill up the native drive
> very
> > > quickly.
> > >
> > > how can i avoid this?
> > [...]
> >
> > Just create a file called "THIS_IS_THE_USB_DRIVE" on the drive itself,
>
> ... or a file "THIS_IS_THE_HOST_DRIVE" in the directory you are mounting to
> (and invert the testing logic). Or, of course, read mountpoint(1) and do
> something like
>
>mountpoint -q /mnt/usb && rsync -aHPp /data/ /mnt/usb/data/
>
> It all depends on what you want to make easy and what you want to guard
> against.
>
> All of that said, remember that rsyncing a BackupPC pool doesn't scale well
> and may fail at some point in the future. Also, syncing a live pool will
> probably not lead to a consistent copy. Depending on what you might be
> using
> the copy for, that may or may not be a problem (restoring from backups
> completed before starting the copy will probably work - though parallel
> chain
> renumbering might mess things up (don't really know), but I wouldn't
> recommend
> using it (or a copy of it) to continue backing up to).
>
> Regards,
> Holger
>
>


Re: [BackupPC-users] making a mirror image of the backup pc disk

2009-12-09 Thread Holger Parplies
Hi,

Pat Rice wrote on 2009-12-09 11:04:37 + [[BackupPC-users] making a mirror 
image of the backup pc disk]:
> [...]
> What I would like to know, or if any on had any experience of:
> Making a mirror or the backup disk:

well, yes, it is an FAQ, but in short:

> Should I do a dd?

Yes, in your situation definitely.

> or would a copy be sufficient ?

Maybe, but that's not certain, and you don't want to take the chance. Also,
if your pool is reasonably sized, a copy will take longer than a 'dd'
(possibly by orders of magnitude; in fact, it might not complete before
running out of resources). Only a small pool on a large disk would be
faster to cp/rsync/tar/... than to dd.
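To sketch what that dd looks like, demonstrated here on scratch files: for the real disks the if=/of= arguments would be the damaged device and the new one (placeholders like /dev/sdX and /dev/sdY), and the names must be double-checked with lsblk first, since swapping them destroys the source.

```shell
# dd with conv=noerror,sync keeps going past read errors, padding bad
# blocks with zeros, which is what you want for a water-damaged source.
# Shown against a scratch file; for disks: dd if=/dev/sdX of=/dev/sdY ...
T=$(mktemp -d)
printf 'pool data' > "$T/damaged.img"
dd if="$T/damaged.img" of="$T/copy.img" bs=64K conv=noerror,sync 2>/dev/null
# Note: conv=sync pads the final partial block up to bs, so copy.img is
# 64K here even though the source is 9 bytes.
```

For a drive that is actively failing, GNU ddrescue is often a better fit than plain dd, since it retries bad sectors and keeps a map of what has been recovered.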

> or will I have to worry about hard links that need to be kept ?

Yes.

Regards,
Holger
