Re: [BackupPC-users] more efficient: dump archives over the internet or copy the whole pool?

2010-11-05 Thread Craig Barratt
Frank, "Aha! I'm not insane!" Definitely not. This appears to be a bug, and I'd like to get to the bottom of it. I suspect there is some metadata, most likely a file size, that isn't encoded correctly. Let's take this off list. No doubt the tar file is very large. You should try to find the
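One hedged way to narrow down where the archive breaks, assuming GNU tar and the /tmp/test.tar from the earlier test; the block number N below is a placeholder taken from tar's own error output:

  # -R/--block-number makes GNU tar print the archive block at which
  # each error message occurs while listing the archive
  tar -tvRf /tmp/test.tar > /dev/null
  # suppose tar complains around block N; dump the raw 512-byte headers
  # there to see which member's size field looks wrong
  dd if=/tmp/test.tar bs=512 skip=N count=2 2>/dev/null | od -c | head -40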

Re: [BackupPC-users] more efficient: dump archives over the internet or copy the whole pool?

2010-11-04 Thread Frank J. Gómez
Aha! I'm not insane! The original tar has the problem. To test it, I became the backuppc user and ran: /usr/share/backuppc/bin/BackupPC_tarCreate -t -h 62z62l1 -n -1 -s \* . > /tmp/test.tar I then moved the tar over to my laptop (didn't want to expand the tar on the server) and checked the
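For reference, the same test as a shell sketch, assuming the > redirect was intended (BackupPC_tarCreate writes the tar to stdout) and that GNU tar is available to sanity-check the result in place instead of moving it:

  sudo -u backuppc /usr/share/backuppc/bin/BackupPC_tarCreate \
      -t -h 62z62l1 -n -1 -s '*' . > /tmp/test.tar
  # listing the archive is enough to surface bad headers without
  # expanding anything on the server
  tar -tvf /tmp/test.tar > /dev/null && echo "archive lists cleanly"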

Re: [BackupPC-users] more efficient: dump archives over the internet or copy the whole pool?

2010-11-04 Thread Frank J. Gómez
For the record, it appears that the corruption is specific to this one client. I tested a tar for another Windows 7 machine that's backed up via smb, and it was fine. Any thoughts? 2010/11/4 Frank J. Gómez fr...@crop-circle.net Aha! I'm not insane! The original tar has the

Re: [BackupPC-users] more efficient: dump archives over the internet or copy the whole pool?

2010-11-03 Thread Frank J. Gómez
Craig, I started deconstructing my script last night before leaving work to see if the tar corruption was somehow my own fault. I got most of the way through without encountering problems, so I'm beginning to think I botched the redirection somewhere along the way. I'm out of the office today

[BackupPC-users] more efficient: dump archives over the internet or copy the whole pool?

2010-11-03 Thread gregwm
...saving to an Amazon S3 share... ...So you have a nice non-redundant repo, and you want to make it redundant before you push it over the net??? Talk sense, man! The main question: He thinks it would be more bandwidth-efficient to tar up and encrypt the pool, which accounts

Re: [BackupPC-users] more efficient: dump archives over the internet or copy the whole pool?

2010-11-03 Thread gregwm
To copy my backuppc volume offsite I wrote a script to pick (from backupvolume/pc/*/backups) the 2 most recent incremental and the 2 most recent full backups from each backup set and rsync all that to the remote site. I'm ignoring the (c)pool, but the hardlinks still apply amongst the selected
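A minimal sketch of that kind of selective copy, with a hypothetical host and backup numbers; the important detail is rsync's -H, which only preserves hard links among files that are part of the same transfer:

  # backup numbers would come from backupvolume/pc/<host>/backups
  rsync -aH --numeric-ids --relative \
      /backupvolume/pc/somehost/298 \
      /backupvolume/pc/somehost/299 \
      /backupvolume/pc/somehost/301 \
      /backupvolume/pc/somehost/302 \
      remote:/offsite/backuppc/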

Re: [BackupPC-users] more efficient: dump archives over the internet or copy the whole pool?

2010-11-03 Thread Tod Detre
This may be way too complicated, but couldn't you create a loopback filesystem that supports hardlinks in a file on Amazon? I know you can do an encrypted loopback fs. You could even do a journaling fs with the journal stored on a local device to help with performance. --Tod On Wed, Nov 3, 2010
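Roughly what that would look like, as a sketch only: the image size, device names, and mount points are placeholders, standard Linux tools (losetup, cryptsetup, mke2fs) are assumed, and whether a filesystem over a loop device over s3fs performs acceptably is an open question:

  # sparse image on the s3fs mount, attached as a loop device
  truncate -s 500G /mnt/s3/backuppc.img
  losetup /dev/loop0 /mnt/s3/backuppc.img
  # optional encryption layer
  cryptsetup luksFormat /dev/loop0
  cryptsetup luksOpen /dev/loop0 bpc_crypt
  # ext4 with its journal on a fast local device, as suggested above
  mke2fs -O journal_dev /dev/local_vg/bpc_journal
  mke2fs -t ext4 -J device=/dev/local_vg/bpc_journal /dev/mapper/bpc_crypt
  mount /dev/mapper/bpc_crypt /var/lib/backuppc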

[BackupPC-users] more efficient: dump archives over the internet or copy the whole pool?

2010-11-02 Thread Frank J. Gómez
A little background: I've been hacking on a copy of BackupPC_archiveHost to run archives through GPG before saving them to disk. The reason for this is that, when I say saving to disk, I mean saving to an Amazon S3 share mounted locally via s3fs
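The shape of the change being described, reduced to a pipeline sketch (not the actual BackupPC_archiveHost code; the host name, key id, and target path are placeholders):

  /usr/share/backuppc/bin/BackupPC_tarCreate -h somehost -n -1 -s '*' . \
      | gzip \
      | gpg --encrypt --recipient backup-key \
      > /mnt/s3/somehost.tar.gz.gpg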

Re: [BackupPC-users] more efficient: dump archives over the internet or copy the whole pool?

2010-11-02 Thread Les Mikesell
On 11/2/2010 2:42 PM, Frank J. Gómez wrote: A little background: I've been hacking on a copy of BackupPC_archiveHost to run archives through GPG before saving them to disk. The reason for this is that, when I say saving to disk, I mean saving to an Amazon S3 share mounted

Re: [BackupPC-users] more efficient: dump archives over the internet or copy the whole pool?

2010-11-02 Thread Frank J. Gómez
Thanks for your response, Les. Regarding the hardlinks, I was thinking (perhaps incorrectly) that, since I'd be putting an encrypted tar.gz on S3 (rather than all the individual files), the hardlinking wouldn't be an issue and that the non-redundancy would be preserved in the tar. I don't see

Re: [BackupPC-users] more efficient: dump archives over the internet or copy the whole pool?

2010-11-02 Thread Les Mikesell
On 11/2/2010 4:22 PM, Frank J. Gómez wrote: Thanks for your response, Les. Regarding the hardlinks, I was thinking (perhaps incorrectly) that, since I'd be putting an encrypted tar.gz on S3 (rather than all the individual files), the hardlinking wouldn't be an issue and that the

Re: [BackupPC-users] more efficient: dump archives over the internet or copy the whole pool?

2010-11-02 Thread Craig Barratt
Frank, Anyway, I thought I had it all figured out, but when I decrypt, gunzip, and untar the resulting file, I get some "tar: Skipping to next header" messages in the output, and, although I do get some files out of the archive, eventually tar just hangs. Does the original tar archive
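For anyone following along, the check being described boils down to something like this sketch (placeholder filename; listing with -t is enough to trigger the skipping-to-next-header warnings without extracting anything):

  gpg --decrypt somehost.tar.gz.gpg | gunzip | tar -tvf - > /dev/null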