Re: [BackupPC-users] BackupPC on client machine running on Fedora13

2011-08-05 Thread Tod Detre
you can find your answer here: http://lmgtfy.com/?q=backuppc+rsync -- Tod

Re: [BackupPC-users] BlackoutPeriods seemingly ignored

2011-06-02 Thread Tod Detre
On Thu, Jun 2, 2011 at 4:28 AM, martin f krafft wrote: > Yes, I understand the concept too, and it makes sense for an office. > However, I am using BackupPC chiefly to back up globally spread > machines. There are no office hours really. The only "office hours" > that exist are for the machines be

Re: [BackupPC-users] Best FS for BackupPC

2011-05-24 Thread Tod Detre
On Tue, May 24, 2011 at 5:52 AM, Marcos Lorenzo de Santiago wrote: > Which one do you think is best for BackupPC? One of the main reasons I like xfs is the tool xfs_copy. This will allow you to copy a backuppc pool filesystem quickly and efficiently. BackupPC pools have too many hardlinks to mak

Re: [BackupPC-users] Block-level rsync-like hashing dd? recommend xfs/lvm

2011-04-11 Thread Tod Detre
On Mon, Apr 11, 2011 at 11:17 AM, Jeffrey J. Kosowsky wrote: > Is this any better than using clonezilla with an ext2/3/4 partition on > an lvm-snapshot? (my understanding is that clonezilla "knows" to skip > unallocated blocks). I don't know anything about clonezilla, but it sounds similar. xfs o

Re: [BackupPC-users] Block-level rsync-like hashing dd? recommend xfs/lvm

2011-04-11 Thread Tod Detre
For backing up my BackupPC pool I've found that using xfs on top of lvm is a great solution. lvm allows you to make a snapshot of the pool. This is nice if you only have a small window between when your servers stop backing up and your users' workstations start. xfs has a great tool called xfs_
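The snapshot-then-copy workflow described above might be sketched roughly as follows. The volume group, LV names, snapshot size, and target device are all assumptions; because the real commands need root and real block devices, the sketch prints them instead of running them (set DRY_RUN=0 to execute):

```shell
# Sketch of the lvm-snapshot + xfs_copy workflow; all device names and
# sizes below are assumptions, not taken from the original message.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi; }

# 1. Freeze a point-in-time view of the pool while backups are idle.
run lvcreate --snapshot --size 10G --name pool-snap /dev/vg0/backuppc
# 2. Copy the snapshot filesystem block-by-block, avoiding the
#    per-hardlink traversal an rsync/cp-style copy would need.
run xfs_copy /dev/vg0/pool-snap /dev/sdb1
# 3. Drop the snapshot so it stops accumulating copy-on-write blocks.
run lvremove -f /dev/vg0/pool-snap
```

The block-level copy is what makes this practical: a BackupPC pool's millions of hardlinks make file-level copies extremely slow, while xfs_copy never looks at individual inodes.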

Re: [BackupPC-users] Ignore large files ?

2011-03-23 Thread Tod Detre
> I'm currently using lots of excludes to build up a backup over a slow link > in stages ..thanks to this list for this suggestion. > > I was wondering if there is an option for rsync / backuppc to ignore files > say over 500Mbytes ? This would be most useful in getting the initial backup > in plac
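rsync itself does have a --max-size option. One possible (untested) approach, assuming your BackupPC version passes extra flags through to the client rsync and that the server-side rsync protocol code tolerates the option, would be to append it in config.pl:

```perl
# config.pl sketch -- verify on a single host first; whether BackupPC's
# File::RsyncP-based transfer accepts --max-size is an assumption here.
push @{$Conf{RsyncArgs}}, '--max-size=500m';
```

Files over the limit are simply skipped by the client-side rsync, so they can be picked up later by raising or removing the limit once the initial backup is in place.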

Re: [BackupPC-users] host groups; was: feature request: description for machines, searchable

2011-03-22 Thread Tod Detre
> Agreed - I've always thought it would be nice if backuppc were aware of > hosts grouped on a network route as well and could separately limit the > concurrency within those groups.   Maybe it could be generalized > with a concept of how much impact a run will have on total concurrency > (to l

Re: [BackupPC-users] IPv6 support

2011-03-16 Thread Tod Detre
> I'd have to look at the rest of the code, but if gethostbyname fails, and if > you remove the nmb stuff, you're left with a hostname.  The question is, does > the rest of the code work with a host name string rather than an IP address? It did when I tried it, but it was on my home network with

Re: [BackupPC-users] BackupPC 3.2 on SLES11 x64 SP1

2011-03-16 Thread Tod Detre
I have BackupPC 3.2 on SLES 11 and it is working fine. I think the issue with the premature end error is that sperl is not setuid by default. Just run 'sudo chmod u+s /usr/bin/sperl5.10.0'. Be warned that sperl is left non-setuid by default for security reasons. If you have BackupPC or any other perl scripts exposed to

Re: [BackupPC-users] IPv6 support

2011-03-15 Thread Tod Detre
Actually, that's not quite the whole picture. BackupPC does do DNS name lookups, and those calls are not IPv6 compatible. One such instance is in bin/BackupPC_dump line 503. The gethostbyname() function (at least last time I tried to do IPv6 with BackupPC) does not support IPv6. This causes BackupP
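The difference is easy to see from the shell (a sketch; exact `getent` behavior depends on your libc/NSS setup): `ahostsv4` mimics the A-record-only view a gethostbyname() caller gets, while the plain `ahosts` lookup goes through getaddrinfo() and can return AAAA records as well:

```shell
# IPv4-only view, roughly what a gethostbyname() caller sees:
getent ahostsv4 localhost
# getaddrinfo()-backed lookup; returns IPv6 addresses where available:
getent ahosts localhost
```

The fix on the code side is the same idea: replace gethostbyname() calls with getaddrinfo()-style lookups that are address-family agnostic.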

Re: [BackupPC-users] Controlling Full Backup

2010-11-03 Thread Tod Detre
I just have a cron job that starts a full backup at a specific time and then have the full backup period set slightly over 1 week. Here's the line from crontab: 0 20 * * sat /usr/local/BackupPC/bin/BackupPC_serverMesg backup root 1 You have to put this in backuppc's crontab (or use su/sudo to cha
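Spelled out as a crontab fragment (the `backuppc` username and install path are assumptions; as root you can edit that user's crontab with `crontab -u backuppc -e`):

```
# backuppc user's crontab: request a full backup of host "root"
# every Saturday at 20:00; the message line is reproduced as given.
0 20 * * sat /usr/local/BackupPC/bin/BackupPC_serverMesg backup root 1
```

Setting $Conf{FullPeriod} slightly above 7 days then prevents BackupPC's own scheduler from starting a competing full backup before the cron job fires.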

Re: [BackupPC-users] more efficient: dump archives over the internet or copy the whole pool?

2010-11-03 Thread Tod Detre
This may be way too complicated, but couldn't you create a loopback filesystem that supports hardlinks in a file on Amazon? I know you can do an encrypted loopback fs. You could even do a journaling fs with the journal stored on a local device to help with the performance. --Tod On Wed, Nov 3, 2010 a
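The loopback idea could look roughly like this. Paths and sizes are assumptions, and performance over a network-backed file is untested; since mkfs and mount need root, the sketch prints the commands instead of running them (set DRY_RUN=0 to execute):

```shell
# Sketch of a hardlink-capable loopback filesystem stored in a file;
# every path and size below is an assumption, not from the message.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi; }

run truncate -s 100G /mnt/remote/pool.img    # sparse backing file
run mkfs.xfs -f /mnt/remote/pool.img         # any fs with hardlinks works
run mount -o loop /mnt/remote/pool.img /mnt/pool
# Encryption can be layered via dm-crypt on the loop device, and an
# external journal (e.g. mkfs.xfs -l logdev=...) can live on local disk.
```

Since the hardlinks live inside the image file's filesystem, the remote store only ever sees one opaque file, sidestepping the usual hardlink-over-the-network problem.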

Re: [BackupPC-users] eSATA drive enclosure

2010-10-19 Thread Tod Detre
We recently bought two of these: http://www.govconnection.com/IPA/Shop/Product/Detail.htm?sku=10461177&cac=Result One of them is installed and works fine. The other arrived damaged and I'm waiting to get it replaced. The great thing about these enclosures is that they don't need any drive sleds s

[BackupPC-users] Nightly script trigger?

2010-10-18 Thread Tod Detre
Is there a way (other than editing the code) to trigger a script after BackupPC_Nightly finishes? We are setting up an off-site copy of our pool and I'd like to trigger this right after the Nightly script finishes. I can just set up a cronjob for a time that all of the backups are likely to be fini

Re: [BackupPC-users] BackupPC_compressPool removed?

2010-10-15 Thread Tod Detre
The reason we are in this situation is that I just moved from an old server with really slow processors to a new server. We didn't compress the data on the old server because of the cpu bottleneck, but now there should be plenty of horsepower. Thanks, Tod

[BackupPC-users] BackupPC_compressPool removed?

2010-10-14 Thread Tod Detre
The changelog states that BackupPC_compressPool was removed in 3.2.0beta0. Is there a replacement for this? I have an uncompressed pool I would like to convert to compressed. Thanks, Tod Detre