You can find your answer here:
http://lmgtfy.com/?q=backuppc+rsync
--
Tod
On Thu, Jun 2, 2011 at 4:28 AM, martin f krafft wrote:
> Yes, I understand the concept too, and it makes sense for an office.
> However, I am using BackupPC chiefly to back up globally spread
> machines. There are no office hours really. The only "office hours"
> that exist are for the machines be
On Tue, May 24, 2011 at 5:52 AM, Marcos Lorenzo de Santiago wrote:
> Which one do you think is best for BackupPC?
One of the main reasons I like xfs is the xfs_copy tool. It lets you
copy a BackupPC pool filesystem quickly and efficiently.
BackupPC pools have too many hardlinks to mak
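For concreteness, here is a minimal dry-run sketch of that copy step. The device names are made up; substitute your own and run the printed commands as root, with the source unmounted (or use a snapshot, as discussed below):

```shell
# Print the commands for an xfs_copy pool duplication (dry run).
# xfs_copy works at the block level, so the hardlink count in the
# pool doesn't matter. Device paths here are hypothetical.
xfs_copy_cmds() {
    src=$1; dst=$2
    echo "umount $src"            # source must not be mounted read-write
    echo "xfs_copy $src $dst"     # block-level duplicate of the filesystem
}
xfs_copy_cmds /dev/vg0/backuppc /dev/sdb1
```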
On Mon, Apr 11, 2011 at 11:17 AM, Jeffrey J. Kosowsky wrote:
> Is this any better than using clonezilla with an ext2/3/4 partition on
> an lvm-snapshot? (my understanding is that clonezilla "knows" to skip
> unallocated blocks).
I don't know anything about clonezilla, but it sounds similar. xfs
o
For backing up my BackupPC pool I've found that using xfs on top of
lvm to be a great solution.
lvm allows you to make a snapshot of the pool. This is nice if you
only have a small window between when your servers stop backing up and
your users' workstations start.
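A snapshot cycle along those lines might look like the following dry-run sketch. The VG/LV names, snapshot size, and mountpoint are all assumptions; adjust them and run the printed commands as root (the nouuid option is needed because the snapshot carries the same xfs UUID as the origin):

```shell
# Print the commands for a snapshot-based copy window (dry run).
# VG/LV names, snapshot size, and mountpoint are hypothetical.
snapshot_cmds() {
    vg=$1; lv=$2; size=$3
    echo "lvcreate --snapshot --size $size --name ${lv}_snap /dev/$vg/$lv"
    echo "mount -o nouuid,ro /dev/$vg/${lv}_snap /mnt/${lv}_snap"
    echo "lvremove -f /dev/$vg/${lv}_snap   # after the copy finishes"
}
snapshot_cmds vg0 pool 5G
```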
xfs has a great tool called xfs_
> I'm currently using lots of excludes to build up a backup over a slow link
> in stages ..thanks to this list for this suggestion.
>
> I was wondering if there is an option for rsync / backuppc to ignore files
> say over 500 Mbytes? This would be most useful in getting the initial backup
> in plac
> Agreed - I've always thought it would be nice if backuppc were aware of
> hosts grouped on a network route as well and could separately limit the
> concurrency within those groups. Maybe it could be generalized
> with a concept of how much impact a run will have on total concurrency
> (to l
> I'd have to look at the rest of the code, but if gethostbyname fails, and if
> you remove the nmb stuff, you're left with a hostname. The question is, does
> the rest of the code work with a host name string rather than an IP address?
It did when I tried it, but it was on my home network with
I have BackupPC 3.2 on SLES 11 and it is working fine. I think the
issue with the premature-end error is that sperl is not setuid by
default. Just run 'sudo chmod u+s /usr/bin/sperl5.10.0'. Be warned
that sperl ships without the setuid bit for security reasons. If you
have BackupPC or any other perl scripts exposed to
Actually, that's not quite the whole picture. BackupPC does do DNS
name lookups, and those calls are not IPv6 compatible. One such
instance is in bin/BackupPC_dump, line 503. The gethostbyname()
function (at least the last time I tried to do IPv6 with BackupPC)
does not support IPv6. This causes BackupP
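For background: gethostbyname(3) only ever returns IPv4 addresses, while getaddrinfo(3) is the address-family-agnostic replacement. On a glibc system you can compare the two lookup paths from a shell with getent (a rough illustration, not BackupPC's own code path):

```shell
# Compare an IPv4-only lookup (what gethostbyname sees) with a
# getaddrinfo-style lookup that may also return IPv6 addresses.
getent ahostsv4 localhost   # IPv4-only view, like gethostbyname()
getent ahosts   localhost   # getaddrinfo() view, may include ::1
```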
I just have a cron job that starts a full backup at a specific time
and then have the full backup period slightly over 1 week.
Here's the line from crontab:
0 20 * * sat /usr/local/BackupPC/bin/BackupPC_serverMesg backup root 1
you have to put this in backuppc's crontab (or use su/sudo to cha
This may be way too complicated, but couldn't you create a loopback
filesystem that supports hardlinks in a file on Amazon? I know you can
do an encrypted loopback fs. You could even do a journaling fs with the
journal stored on a local device to help with performance.
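That idea could be sketched roughly as below. The image path, size, and journal device are invented for illustration; the sparse file is created for real, but the mkfs/mount steps need root, so they are only printed:

```shell
# Sketch: a sparse file-backed filesystem (hardlink-capable) whose
# journal lives on a fast local device. Paths/sizes are hypothetical.
img=/tmp/pool.img
dd if=/dev/zero of="$img" bs=1M count=0 seek=64 2>/dev/null  # 64 MiB sparse file
echo "mke2fs -O journal_dev /dev/local_fast     # dedicated external journal"
echo "mkfs.ext3 -J device=/dev/local_fast $img  # data fs using that journal"
echo "mount -o loop $img /mnt/pool              # loop mount; hardlinks work"
```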
--Tod
On Wed, Nov 3, 2010 a
We recently bought two of these:
http://www.govconnection.com/IPA/Shop/Product/Detail.htm?sku=10461177&cac=Result
One of them is installed and works fine. The other arrived damaged and
I'm waiting to get it replaced.
The great thing about these enclosures is that they don't need any
drive sleds s
Is there a way (other than editing the code) to trigger a script after
BackupPC_Nightly finishes? We are setting up an off-site copy of our
pool and I'd like to trigger this right after the Nightly script
finishes. I can just set up a cronjob for a time that all of the
backups are likely to be fini
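Absent a proper hook, one workaround (beyond a fixed-time cron job) is a cron-started watcher that polls until the Nightly process is gone and then runs the off-site copy. A minimal sketch, assuming pgrep is available and the process is findable by name; the interval and rsync destination are made up:

```shell
# Poll until no BackupPC_Nightly process remains, then return so a
# follow-up command can run. Interval and destination are assumptions.
wait_for_nightly() {
    interval=${1:-60}
    command -v pgrep >/dev/null || return 1   # needs procps
    while pgrep -f BackupPC_Nightly >/dev/null; do
        sleep "$interval"
    done
}
wait_for_nightly 5
# e.g.: wait_for_nightly && rsync -aH /var/lib/backuppc/ offsite:/backuppc/
```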
We are in this situation because I just moved from an old server
with really slow processors to a new one. We didn't compress the
data on the old server because of the CPU bottleneck, but now there
should be plenty of horsepower.
Thanks,
Tod
The changelog states that BackupPC_compressPool was removed in
3.2.0beta0. Is there a replacement for this? I have an uncompressed
pool I would like to convert to compressed.
Thanks,
Tod Detre