Frank,
Aha! I'm not insane!
Definitely not.
This appears to be a bug, and I'd like to get to the bottom of it.
I suspect there is some metadata, most likely a file size, that
isn't encoded correctly.
Let's take this off list. No doubt the tar file is very large.
You should try to find the
Aha! I'm not insane!
The original tar has the problem. To test it, I became the backuppc user
and ran:
/usr/share/backuppc/bin/BackupPC_tarCreate -t -h 62z62l1 -n -1 -s \* . > /tmp/test.tar
I then moved the tar over to my laptop (didn't want to expand the tar on the
server) and checked the
For the record, the corruption appears to be specific to the one client. I
tested a tar for another Windows 7 machine that's backed up via smb, and it
was fine. Any thoughts?
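A minimal way to walk the whole archive without extracting it, assuming GNU tar
and the /tmp/test.tar path from the command above, is something like:

  tar -tvf /tmp/test.tar > /dev/null    # list every entry, discard the listing
  echo "tar exit status: $?"            # non-zero on header/checksum errors

A clean archive lists everything and exits 0; a corrupt one produces the same
"Skipping to next header" complaints that show up on extraction.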
2010/11/4 Frank J. Gómez fr...@crop-circle.net
Craig,
I started deconstructing my script last night before leaving work to see if
the tar corruption was somehow my own fault. I got most of the way through
without encountering problems, so I'm beginning to think I botched the
redirection somewhere along the way. I'm out of the office today
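Purely as an illustration of the kind of redirection slip that can produce
exactly this symptom (the hostname and GPG recipient below are invented, and
this is not necessarily what the script does): sending stderr into the pipe
folds BackupPC_tarCreate's warnings into the archive data itself.

  # Broken: "2>&1" before the pipe mixes stderr messages into the tar stream.
  /usr/share/backuppc/bin/BackupPC_tarCreate -h somehost -n -1 -s \* . 2>&1 \
      | gzip | gpg -e -r backup@example.com > somehost.tar.gz.gpg

  # Intact: keep stderr out of the data path, e.g. send it to a log file.
  /usr/share/backuppc/bin/BackupPC_tarCreate -h somehost -n -1 -s \* . 2>/tmp/tarCreate.err \
      | gzip | gpg -e -r backup@example.com > somehost.tar.gz.gpg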
...saving to an Amazon s3 share...
...So you have a nice
non-redundant repo, and you want to make it redundant before you push it
over the net??? Talk sense man!
The main question:
==
He thinks it would be more bandwidth-efficient to tar up and encrypt the
pool, which accounts
To copy my BackupPC volume offsite I wrote a script to pick
(from backupvolume/pc/*/backups) the 2 most recent incremental and the
2 most recent full backups from each backup set and rsync all that to the
remote site. I'm ignoring the (c)pool, but the hardlinks still apply amongst
the selected backups.
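A rough sketch of that kind of selective copy (host names, backup numbers and
the destination are purely illustrative): rsync's -H option only reproduces
hardlinks among files transferred in the same invocation, so all the selected
backups have to go in a single run.

  # Copy hand-picked backup trees offsite, preserving the hardlinks that
  # exist between them (-a: permissions/times/ownership, -H: hardlinks).
  rsync -aH --numeric-ids \
      /backupvolume/pc/host1/120 /backupvolume/pc/host1/123 \
      remotehost:/offsite/pc/host1/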
This may be way too complicated, but couldn't you create a loopback
filesystem that supports hardlinks in a file on Amazon? I know you can
do an encrypted loopback fs. You could even do a journaling fs with the
journal stored on a local device to help with the performance.
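A hypothetical sketch of that idea (device names, size and mount points are
invented, and whether s3fs copes well with a large image file being updated in
place is a separate question):

  # Sparse 100 GB image on the s3fs mount, attached as an encrypted loop device.
  dd if=/dev/zero of=/mnt/s3/backuppc.img bs=1M count=0 seek=102400
  losetup /dev/loop0 /mnt/s3/backuppc.img
  cryptsetup luksFormat /dev/loop0
  cryptsetup luksOpen /dev/loop0 pcimg

  # ext4 inside the image, with its journal on a fast local partition.
  mke2fs -O journal_dev /dev/sdc1
  mkfs.ext4 -J device=/dev/sdc1 /dev/mapper/pcimg
  mount /dev/mapper/pcimg /mnt/backuppc-image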
--Tod
A little background:
==
I've been hacking on a copy of BackupPC_archiveHost to run archives through
GPG before saving them to disk. The reason for this is that, when I say
saving to disk, I mean saving to an Amazon s3 share mounted locally via
s3fs.
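The general shape of that change, as a rough sketch rather than the actual
patch (bucket name, mount point, host and key are placeholders), is to put gpg
between the compression stage and the file that lands on the s3fs mount:

  # s3fs presents the bucket as a local directory (credentials assumed to be
  # configured already, e.g. in ~/.passwd-s3fs).
  s3fs mybucket /mnt/s3

  # Archive the most recent backup of one host, compress, encrypt, write to S3.
  /usr/share/backuppc/bin/BackupPC_tarCreate -h somehost -n -1 -s \* . \
      | gzip \
      | gpg -e -r backup@example.com \
      > /mnt/s3/somehost.tar.gz.gpg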
Thanks for your response, Les.
Regarding the hardlinks, I was thinking (perhaps incorrectly) that since I'd
be putting an encrypted tar.gz on S3 (rather than all the individual files)
that the hardlinking wouldn't be an issue and that the non-redundancy would
be preserved in the tar.
I don't see
Frank,
Anyway, I thought I had it all figured out, but when I decrypt, gunzip, and
untar the resulting file, I get some "tar: Skipping to next header" messages
in the output, and, although I do get some files out of the archive,
eventually tar just hangs.
Does the original tar archive have the same problem?
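One way to narrow down where the stream gets mangled (file names here are just
placeholders) is to checksum the output of each stage and run tar's listing
over the end result:

  gpg -d somehost.tar.gz.gpg > step1.tar.gz && md5sum step1.tar.gz
  gunzip -c step1.tar.gz > step2.tar && md5sum step2.tar
  tar -tvf step2.tar > /dev/null; echo "tar exit status: $?"

If the decrypted and uncompressed stream matches what was produced on the
server but tar still complains, the archive was bad before gpg ever touched it,
which is what the "The original tar has the problem" message above reports.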