Hi,
I have a Windows client that has been working fine for over a year, and
now there are three files in the 6-7 GB range that it just ignores. I
am using cygwin-rsyncd 2.6.6 and BackupPC 2.1.2. I was able to force a
backup of one large file by excluding all other directories except for
the one containing it.
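For reference, the usual way to force that kind of test is a per-host
override of $Conf{BackupFilesOnly} (or the inverse,
$Conf{BackupFilesExclude}); a minimal sketch, assuming the share is
named "cDrive" and the big files live under /hugefiles (both names
invented):

    # in the host's config file, e.g. pc/<host>.pl
    $Conf{BackupFilesOnly} = {
        'cDrive' => ['/hugefiles'],   # back up only this directory
    };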
David writes:
> I took a closer look at the perl code and I see the cause of the problem.
> Please note I have no DNS. My PCs use DHCP, but are configured in BackupPC
> with the host table's DHCP flag set to zero.
>
> Here is what I think is happening:
>
> 1. BackupPC_dump is called periodically
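(For context, the DHCP column David mentions is the second field of the
hosts file; with it set to 0, BackupPC looks the machine up by name
rather than scanning a DHCP address range. A sample line, host and user
names invented:)

    # hosts file format: host  dhcp  user  moreUsers
    mypc    0    david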
Craig Barratt wrote:
>
> I don't think there is a way to transfer a single host's backups using
> BackupPC_tarPCCopy.
What happens if you just copy a single host's backup tree without regard
to the cpool links (assuming you have space)? Will subsequent runs put
links into cpool so the wasted space is eventually reclaimed?
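For what it's worth, a straight copy of only pc/<host> necessarily
breaks the hard links into ../cpool, so every file lands as a full
copy; the links survive only if pool and pc are copied in one pass from
a common root. A sketch with made-up paths:

    # duplicates data: links into ../cpool are not preserved
    cp -a /var/lib/backuppc/pc/myhost /mnt/new/pc/

    # -H preserves hard links, but only among files in the same run,
    # so cpool and pc have to be copied together
    rsync -aH /var/lib/backuppc/ /mnt/new/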
Paul writes:
> In case this is of use to others, I tweaked the BackupPC_archiveStart
> script to properly (IMHO) deal with the ArchiveComp setting. While my
> coding style may be icky to some, I think my removal of the ".raw" file
> extension for uncompressed archive files may be of issue to others.
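For anyone following along, the suffix comes from $Conf{ArchiveComp};
an illustrative sketch of the mapping being tweaked (this is not the
actual BackupPC_archiveStart source):

    # illustrative only: map ArchiveComp to a file extension
    my %suffix = (
        none  => '.raw',       # Paul's change drops this one
        gzip  => '.tar.gz',
        bzip2 => '.tar.bz2',
    );
    my $ext = $suffix{$Conf{ArchiveComp}} || '.raw';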
Matthias writes:
> I back up a Windows client with rsyncd over ssh. I am pretty sure the ssh
> connection was interrupted at 23:27.
> In the /var/lib/backuppc/pc/st-ms-wv/XferLOG.0.z I found the error message:
> create 770 4294967295/4294967295 240986 Help/Windows/de-DE/artcone.h1s
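On the 4294967295/4294967295 part: that is 2^32 - 1, i.e. a uid/gid of
-1 stored in an unsigned 32-bit field, which is typically what you see
when the Windows owner cannot be mapped to a numeric id; it is not
itself the error. A quick Perl check:

    # -1 reinterpreted as unsigned 32-bit is 4294967295
    printf "%u\n", unpack("L", pack("l", -1));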
sabujp writes:
> In the last command that runs BackupPC_tarPCCopy, does this perl command look
> at any of the configuration files on the local host or does it just get what
> it needs to re-generate the hard links straight from the old "pc" directory?
> I looked through the code and don't see
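Not an answer to the config question, but for reference the documented
copy sequence just points BackupPC_tarPCCopy at the old pc directory
and un-tars the stream into the new one (paths here are assumptions,
and the cpool is assumed to have been copied beforehand):

    cd /new/topdir/pc
    /usr/share/backuppc/bin/BackupPC_tarPCCopy /old/topdir/pc | tar xvPf -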
On Mon, Feb 23, 2009 at 01:15:05PM -0600, Les Mikesell wrote:
> Tino Schwarze wrote:
> > The n+1-th incremental
> > compares against the most recent n-th incremental, therefore only
> > transfers the difference to that. But since merging several incrementals
> > is expensive in terms of server I/O, you may limit the number of levels.
Tino Schwarze wrote:
> The n+1-th incremental
> compares against the most recent n-th incremental, therefore only
> transfers the difference to that. But since merging several incrementals
> is expensive in terms of server I/O, you may limit the number of levels.
This is an option in recent versions.
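Concretely, in 3.x it is $Conf{IncrLevels}; for example, six levels
where each incremental is based on the previous one:

    # level resets after a full, then runs 1, 2, 3, ... per this list
    $Conf{IncrLevels} = [1, 2, 3, 4, 5, 6];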
Hi all,
On Mon, Feb 23, 2009 at 01:25:00PM -0500, Jeffrey J. Kosowsky wrote:
> > I believe one of the main incremental backup issues is that they do not
> > detect deleted files. Incremental backups are also usually MUCH faster
> > than full backups. For example, one of my backups takes about
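One caveat on the deleted-files point: it depends on the transfer
method. rsync exchanges a complete file list even on incrementals, so
deletions are noticed; tar and smb incrementals only request files
newer than the reference backup. The setting in question:

    # rsync-based incrementals can detect deletions; tar/smb cannot
    $Conf{XferMethod} = 'rsync';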
Bowie Bailey wrote at about 12:26:16 -0500 on Wednesday, February 18, 2009:
> John Goerzen wrote:
> > Hi,
> >
> > I've been reading docs on BackupPC and I have a few questions about
> > how it works.
> >
> > First off, I gather that it keeps a hardlinked pool of data, so
> > whenever a file with identical content is backed up, it is stored only once.
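(On the pool mechanics: identical content is stored once, and every
backup that contains it is just another hard link, which you can verify
from the link count. A toy demonstration outside BackupPC, file names
invented:)

    echo data > pool/f
    ln pool/f backup/f                       # second name, same inode
    stat -c 'links=%h inode=%i' pool/f backup/f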