Ok,
I think I have a hardware problem. I was able to back up the client to
a different backuppc server with no problems. Please ignore the
previous posts.
cheers,
ski
Ski Kacoroski wrote:
This works ok when I run tar just to a local file on the client:
tar -cvf test.tar /opt/www.
so it seems to be a problem with ssh and tar together.
ski
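Since a plain local tar works but ssh plus tar fails, one way to narrow it down is to stream tar through a pipe and verify the listing on the receiving side. A minimal local sketch (all paths are scratch placeholders, not the original /opt/www tree); the commented line shows the equivalent ssh form:

```shell
#!/bin/sh
# Isolate the transport: stream tar through a pipe and count the entries
# that arrive. If the local pipe fails too, tar itself is the problem.
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/src"
dd if=/dev/zero of="$tmp/src/big.bin" bs=1024 count=512 2>/dev/null
tar -C "$tmp" -cf - src | tar -tf - | wc -l
# The equivalent test over ssh (client name is a placeholder):
#   ssh root@client "tar -C /opt -cf - www" | tar -tf - > /dev/null
rm -rf "$tmp"
```

If the local pipe survives but the ssh form truncates, the stream is being cut by the transport (or by something ssh runs, like a chatty login script writing to stdout), not by tar.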
Hi,
I am trying to use the rsync method to back up a fedora 3 linux machine
and consistently get this error (loglevel = 4):
opt/www/html/draw_pics/05/04/15/673182218_01_30387.jpg got digests
8ca8a63e98a9cad0e6bfcdcc49e28591 vs 8ca8a63e98a9cad0e6bfcdcc49e28591
create 502/502 21
Hi,
I am backing up a linux server via tar and it consistently breaks on a
330 MB video file with:
/opt/www/html/training_cd/lineitem_img/NWCC-IntroV'1.mpg
tar: Read 6144 bytes from -
tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now
I have tested this by running $Conf
I have been struggling with this for a few weeks now...
Server: debian sarge: backuppc 2.1.1-2
Client: OSX Tiger 10.4.2 "Server"
Using the new tiger tar, or xtar, I get the same results:
Everything transfers along just fine until it hits my netboot images.
It transfers about 6.5 gigs of a 12 gi
Using the --one-file-system option, the tar method fails to back up files in
the target subdirectory of a mounted partition. It backs up directories without
their files, unlike the rsync method using the --one-file-system option, which
works as expected.
I checked the log to verify that the tar
Just wondering if there were any thoughts on this, or if you need
further description or clarification of the issue...
Thanks!
--Chris
On Aug 29, 2005, at 4:47 PM, Chris Stone wrote:
Hi,
Using BackupPC 2.1.1, I have $Conf{DumpPostUserCmd} set to run a
script that creates an archive after
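For reference, a post-dump hook of that shape is set via $Conf{DumpPostUserCmd}. A config sketch — the script path and its arguments are illustrative only; $host and $xferOK are BackupPC's own substitution variables:

```perl
# Hedged sketch: run a site-specific archiving script after each dump.
# /usr/local/bin/make_archive.sh is a hypothetical script, not from the post;
# $host and $xferOK are substituted by BackupPC at run time.
$Conf{DumpPostUserCmd} = '/usr/local/bin/make_archive.sh $host $xferOK';
```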
> I know that backuppc uses hard links internally. However what I wanted
> to know is how/whether it handles hard links in a share that it's
> backing up and restoring.
>
> Will a restored version contain hard links as the original share did?
It depends on your backup method.
> Would it
Hi,
I know that backuppc uses hard links internally. However what I wanted
to know is how/whether it handles hard links in a share that it's
backing up and restoring.
Will a restored version contain hard links as the original share did?
Would it work if I added --hard-links to $Conf{RsyncArgs}?
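If the transfer method is rsync, adding --hard-links in both directions is the usual approach. A config sketch, assuming a stock argument list (the exact baseline depends on your BackupPC version):

```perl
# Sketch: preserve hard links within the share by passing --hard-links
# to rsync for both backup and restore. Assumes the rsync transfer method.
push @{$Conf{RsyncArgs}},        '--hard-links';
push @{$Conf{RsyncRestoreArgs}}, '--hard-links';
```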
I would like to archive a backuppc client state on a given date. Hopefully this
would allow me to restore the files from that archive sometime in the future.
The only method I could think of was to create a zip or tar archive in restore
options, then store it somewhere.
The backuppc inter
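The create-an-archive-now, restore-later workflow can be exercised end to end with plain tar; on the server the stream would come from BackupPC_tarCreate instead (the host and share names in the comment are placeholders):

```shell
#!/bin/sh
# Round-trip sketch: archive a tree to tar.gz, then restore it elsewhere.
# Scratch paths only; on the BackupPC server the stream would come from e.g.
#   BackupPC_tarCreate -h client -n -1 -s /opt/www . | gzip > client.tar.gz
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/share"
echo "hello" > "$tmp/share/file.txt"
tar -C "$tmp" -czf "$tmp/client.tar.gz" share
mkdir "$tmp/restore"
tar -C "$tmp/restore" -xzf "$tmp/client.tar.gz"
cat "$tmp/restore/share/file.txt"
rm -rf "$tmp"
```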
Thanks for your answer Craig. Archive-host is the instance of my backup
server that takes care of archiving. Anyway, as I dug further I came across
a post about archiveme.pl, but I couldn't find it, so I decided to create
my own. It creates an archive request file and runs the command I grabbed
f
Ralf Gross writes:
> Craig Barratt schrieb:
>
> > Ralf Gross writes:
> >
> > > I changed it to 7.1. If I want to disable full backups for a host for
> > > awhile, it is not sufficient to just comment out the crontab entry
> > > anymore, I have to remember to set $Conf{FullPeriod} to -1 again?
> >
"Christophe Faribault" writes:
> /backups/bin/BackupPC_archiveHost /backups/bin/BackupPC_tarCreate
> /usr/bin/split /usr/bin/par2 archive-host -1 /bin/gzip .gz 0 /mnt/exthd 0
> "*"
>
> But I get this when I run it:
>
> Writing tar archive for host archive-archive, backup #-1 to output file
> /m
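Downstream of the tar stream, BackupPC_archiveHost applies the compress and split programs passed on its command line (/bin/gzip and /usr/bin/split above). That stage can be reproduced losslessly with the plain tools — a sketch with scratch paths and illustrative sizes, not the failing command itself:

```shell
#!/bin/sh
# Sketch of the compress + split stage: verify the pieces reassemble
# losslessly by listing the archive from the recombined stream.
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/data"
dd if=/dev/zero of="$tmp/data/payload" bs=1024 count=64 2>/dev/null
tar -C "$tmp" -cf - data | gzip > "$tmp/host.tar.gz"
split -b 1024 "$tmp/host.tar.gz" "$tmp/host.tar.gz."
# Reassemble the split pieces and list the archive to confirm integrity:
cat "$tmp"/host.tar.gz.* | gzip -dc | tar -tf -
rm -rf "$tmp"
```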