Jim Wilcoxson wrote:
>> What does it use to map the hardlinks internally? Is this
>> likely to remain correct if backuppc rebuilds collision chains in the
>> pool during a copy - or even before the next incremental?
>
> I'm not very familiar with the internals of BackupPC, but if you have
> a set
Michael Stowe wrote at about 14:48:16 -0500 on Thursday, August 20, 2009:
> Sort of -- VSS/cygwin has a weird catch-22 where the shadow copy has to
> exist before cygwin is launched or cygwin can't see the volume at all. In
> other words, ssh can start a shadow copy, but then any launched rsync
David wrote at about 10:51:55 +0200 on Thursday, August 20, 2009:
> Thanks for the replies so far :-) They were very informative.
>
> About BackupPC itself, I'm still evaluating whether or not to actually
> use it, but I'm starting to decide against it. Here are my reasons:
>
> 1) We're not
On 8/28/09, Les Mikesell wrote:
>
> Unfortunately I was testing on the same disk where I do a weekly image
> copy so I'll have to start over later - but I do have another place to
> try it. What does it use to map the hardlinks internally? Is this
> likely to remain correct if backuppc rebuilds
Jim Wilcoxson wrote:
> Hi Les - thanks for trying it out!
>
> It sounds like you are seeing about 300GB in 1200 minutes, or 4
> minutes per GB. That's about what I see on average when backing up a
> real system initially. Yesterday I backed up 33GB on a G5 Mac (the
> Mac version isn't released y
Hi Les - thanks for trying it out!
It sounds like you are seeing about 300GB in 1200 minutes, or 4
minutes per GB. That's about what I see on average when backing up a
real system initially. Yesterday I backed up 33GB on a G5 Mac (the
Mac version isn't released yet), and it took 110 minutes.
Ho
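The throughput figures quoted above can be sanity-checked with a quick calculation (the numbers come straight from the message; the script is only an illustration):

```python
# Check the backup rates reported in the thread.

gb_linux, minutes_linux = 300, 1200        # initial backup reported by Les
rate_linux = minutes_linux / gb_linux      # minutes per GB
print(f"Initial backup: {rate_linux:.1f} min/GB")

gb_mac, minutes_mac = 33, 110              # G5 Mac backup reported by Jim
rate_mac = minutes_mac / gb_mac
print(f"G5 Mac: {rate_mac:.2f} min/GB")
```

So the Mac run was slightly faster per GB (about 3.3 min/GB vs. 4 min/GB), consistent with "about what I see on average".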
Jim Wilcoxson wrote:
> Michael - I have a new Linux/FreeBSD backup program, HashBackup, in

> beta that I believe will handle a large backuppc server. In tests, it
> will backup a single directory with 15M (empty) files/hardlinks, with
> 32000 hard links to each file, and can do the initial and in
Craig Barratt wrote at about 10:48:25 -0700 on Friday, August 28, 2009:
> Jeffrey writes:
>
> > In general, using FullKeepCnt and IncrKeepCnt (and associated
> > variables) works well to prune older backups.
> >
> > But sometimes there is a *specific* older backup that you want to hang
> >
Jeffrey writes:
> In general, using FullKeepCnt and IncrKeepCnt (and associated
> variables) works well to prune older backups.
>
> But sometimes there is a *specific* older backup that you want to hang
> onto because it has some crucial data (or is a 'better' snapshot). It
> would be great if yo
Volker Thiel wrote:
> On 27.08.2009 at 23:15, Chris Robertson wrote:
>
>
>> Volker Thiel wrote:
>>
>>> Also, I'd like to know if there's a way to start BackupPC in daemon
>>> mode?
>>>
>> /path/to/installation/bin/BackupPC -d
>>
>
> Sometimes it is as simple as this. :) Where c
s...@users.sourceforge.net wrote at about 18:27:31 +0200 on Friday, August 28, 2009:
> On Fri, Aug 28, 2009 at 11:47:41AM -0400, Jeffrey J. Kosowsky wrote:
> > In general, using FullKeepCnt and IncrKeepCnt (and associated
> > variables) works well to prune older backups.
> >
> > But sometime
On Fri, Aug 28, 2009 at 11:47:41AM -0400, Jeffrey J. Kosowsky wrote:
> In general, using FullKeepCnt and IncrKeepCnt (and associated
> variables) works well to prune older backups.
>
> But sometimes there is a *specific* older backup that you want to hang
> onto because it has some crucial data (o
On Friday, August 28, 2009 at 11:47 -0400, Jeffrey J. Kosowsky wrote:
> In general, using FullKeepCnt and IncrKeepCnt (and associated
> variables) works well to prune older backups.
>
> But sometimes there is a *specific* older backup that you want to hang
> onto because it has some crucial data (
I like it...
steve
On Fri, Aug 28, 2009 at 11:47 AM, Jeffrey J.
Kosowsky wrote:
> In general, using FullKeepCnt and IncrKeepCnt (and associated
> variables) works well to prune older backups.
>
> But sometimes there is a *specific* older backup that you want to hang
> onto because it has some cruc
In general, using FullKeepCnt and IncrKeepCnt (and associated
variables) works well to prune older backups.
But sometimes there is a *specific* older backup that you want to hang
onto because it has some crucial data (or is a 'better' snapshot). It
would be great if you could tell BackupPC to keep
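For reference, the pruning knobs mentioned here live in BackupPC's `config.pl`. The values below are illustrative only, not a recommendation; `$Conf{FullKeepCnt}` may be an array, where each successive element keeps fulls at double the spacing of the previous one:

```perl
# Example retention settings in config.pl (illustrative values only).
# Keep the 4 most recent fulls, then 2 at double spacing, then 2 at
# quadruple spacing, then 1 older still.
$Conf{FullKeepCnt} = [4, 2, 2, 1];

# Keep the 6 most recent incrementals.
$Conf{IncrKeepCnt} = 6;
```

As the message notes, these settings prune by age and count only; there is no per-backup "keep this one forever" flag, which is the feature being requested.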
Nigel Kendrick wrote at about 09:21:03 +0100 on Wednesday, August 19, 2009:
> Morning,
>
> Happy to report that first MSSQL 700MB database dump across ADSL took 433
> mins to complete and resulted in a 130MB compressed file, but the night's
> first full backup took 22mins to sync the changes
Jim Leonard wrote at about 19:45:57 -0500 on Tuesday, August 18, 2009:
> Holger Parplies wrote:
> > first of all, where are you seeing these figures, and what are you
> > measuring?
>
> Rather than try to convince you of my competence, I will offer up these
> benchmarks for the exact same e
After downloading this and trying it out, I noticed that these errors went
away:
Remote[1]: rsync: readlink "..." (in C) failed: File name too long (91)
So whatever version of rsync/cygwin that is, is well worth running for
that reason alone. With that, I now have a complete copy of 100% of the
Because this is a dump, a small change in a record near the beginning of the
sort order would cause the whole file to have a different structure. If you can
influence the sort order of the dump, sorting by the date each record
was updated, then you likely wouldn't have this issue.
example (CSV formatted)
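The example itself is lost from the archive, but the idea can be sketched like this (the data and column layout are entirely hypothetical): sorted by primary key, one early insert shifts every later line of the dump, defeating rsync's block matching; sorted by last-updated date, the unchanged history stays byte-identical at the start of the file and only the tail changes.

```python
# Hypothetical illustration of why dump sort order matters for
# delta-transfer backups. Rows are (primary_key, updated_date, value).

rows = [
    ("1003", "2009-08-01", "alice"),
    ("1001", "2009-08-15", "bob"),
    ("1002", "2009-08-28", "carol"),   # most recently updated row
]

# Sorted by primary key: a new row with a low key shifts everything below it.
by_key = sorted(rows, key=lambda r: r[0])

# Sorted by update date: old, unchanged rows stay in the same position,
# so an rsync-style transfer only needs to ship the end of the file.
by_date = sorted(rows, key=lambda r: r[1])

for row in by_date:
    print(",".join(row))
```

With date ordering, each night's dump is a strict prefix of the next one (plus updated rows at the end), which is the best case for rsync's rolling-checksum delta algorithm.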
Hi Paul and thanks for your reply!
> In addition to the $args->{Host}, $t->{host} ambiguity, the ftp module
> is trying to print out OS errors instead of eval errors, which is
> causing misleading errors. Please patch your install directory with
> the attached patch (use the option -p1 as well).
I did have 2 Raptors that I tested in a RAID 0. I noticed a small
improvement. I got a bigger improvement by using XFS or ReiserFS rather
than ext3, if that gives you a sense of scale. I think that spindle count makes a
larger difference, so I went with Samsung F1 drives in a 10-disk RAID 10 for
my most rece
On Fri, Aug 28, 2009 at 11:55:35AM +0100, Nigel Kendrick wrote:
> Does backuppc support the --sparse flag for rsyncd remote backups -
> searching for answers led me to 'probably not' in an old post.
I don't know for sure, but I doubt it since BackupPC_dump will probably
just produce zeroes and co
Hi,
Does backuppc support the --sparse flag for rsyncd remote backups -
searching for answers led me to 'probably not' in an old post.
If it is supported, any benefit of using it with my famous database backup
dumps?
Thanks
Nigel