On Tue, Sep 1, 2009 at 12:05 AM, Les Mikesell lesmikes...@gmail.com wrote:
Jim Leonard wrote:
Les Mikesell wrote:
With backuppc the issue is not so much fragmentation within a file as
the distance between the directory entry, the inode, and the file
content. When creating a new file,
Jim Leonard wrote:
Les Mikesell wrote:
With backuppc the issue is not so much fragmentation within a file as
the distance between the directory entry, the inode, and the file
content. When creating a new file, filesystems generally attempt to
allocate these close to each other, but when
Jim Wilcoxson wrote:
Hi Les - thanks for trying it out!
It sounds like you are seeing about 300GB in 1200 minutes, or 4
minutes per GB. That's about what I see on average when backing up a
real system initially. Yesterday I backed up 33GB on a G5 Mac (the
Mac version isn't released yet),
On 9/1/09, Les Mikesell lesmikes...@gmail.com wrote:
Jim Wilcoxson wrote:
Hi Les - thanks for trying it out!
It sounds like you are seeing about 300GB in 1200 minutes, or 4
minutes per GB. That's about what I see on average when backing up a
real system initially. Yesterday I backed up
Hi,
Jim Wilcoxson wrote on 2009-08-31 08:08:48 -0400 [Re: [BackupPC-users] Keeping
servers in sync]:
[...]
I did some reading today about BackupPC's storage layout and design.
I haven't finished yet, but one thing stuck out:
BackupPC_link reads the NewFileList written by BackupPC_dump
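For illustration, the dump-then-link flow Holger mentions can be sketched roughly as below (hypothetical Python, assuming a simplified NewFileList of one "digest path" pair per line; real BackupPC also handles compression and collision-chain suffixes):

    import filecmp
    import os

    def link_new_files(new_file_list, pool_dir):
        # For each newly dumped file, hard-link it against the pool,
        # creating the pool entry when the content is new.
        with open(new_file_list) as f:
            for line in f:
                digest, path = line.rstrip("\n").split(" ", 1)
                pool_path = os.path.join(
                    pool_dir, digest[0], digest[1], digest[2], digest)
                if os.path.exists(pool_path):
                    if filecmp.cmp(pool_path, path, shallow=False):
                        # Content already pooled: keep one copy, linked twice.
                        os.unlink(path)
                        os.link(pool_path, path)
                    # else: hash collision; real BackupPC appends _0, _1, ...
                else:
                    # First occurrence: the pc/ file becomes the pool copy.
                    os.makedirs(os.path.dirname(pool_path), exist_ok=True)
                    os.link(path, pool_path)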
Holger Parplies wrote at about 19:00:40 +0200 on Tuesday, September 1, 2009:
Hi,
Jim Wilcoxson wrote on 2009-08-31 08:08:48 -0400 [Re: [BackupPC-users]
Keeping servers in sync]:
[...]
I did some reading today about BackupPC's storage layout and design.
I haven't finished yet
Jeffrey J. Kosowsky wrote at about 13:58:41 -0400 on Tuesday, September 1, 2009:
It seems like a lot of issues with file-level BackupPC backups (both
full and incremental) could be solved if we had the following:
1. No chain renumbering - either by using the full file md5sum or other
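A minimal sketch of that first point: hashing the whole file, rather than a partial-file digest, makes the digest itself a stable pool name, so no chain ever needs renumbering (illustrative Python, not BackupPC code):

    import hashlib

    def full_file_md5(path, bufsize=1 << 20):
        # Hash the entire contents. A partial digest (sampling only parts
        # of large files) can collide and so needs collision chains; a
        # full-content hash can serve directly as the pool file name.
        h = hashlib.md5()
        with open(path, "rb") as f:
            while chunk := f.read(bufsize):
                h.update(chunk)
        return h.hexdigest()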
On 8/30/09, Jeffrey J. Kosowsky backu...@kosowsky.org wrote:
Les Mikesell wrote at about 14:26:47 -0500 on Friday, August 28, 2009:
Jim Wilcoxson wrote:
Michael - I have a new Linux/FreeBSD backup program, HashBackup, in
beta that I believe will handle a large backuppc server. In
Another thing about BackupPC is that by my reading, new files are
first written to the PC area, then pool links are created by
BackupPC_link. This suggests that backing up the pool last might
improve performance, because it is likely to be more fragmented.
Let me just say ... huh? What
Michael Stowe wrote:
Another thing about BackupPC is that by my reading, new files are
first written to the PC area, then pool links are created by
BackupPC_link. This suggests that backing up the pool last might
improve performance, because it is likely to be more fragmented.
Let me just
Les Mikesell wrote:
With backuppc the issue is not so much fragmentation within a file as
the distance between the directory entry, the inode, and the file
content. When creating a new file, filesystems generally attempt to
allocate these close to each other, but when you link an existing
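One rough way to observe that effect is to compare inode numbers, which on many filesystems track block-group placement: a freshly created file sits near its directory, while a hard link to an existing pool file points at an inode allocated wherever the pool copy was first written. An illustrative heuristic only; inode-number distance is not literal disk distance:

    import os

    def inode_spread(tree):
        # Report, per directory, how far the inode numbers of its entries
        # stray from the directory's own inode.
        for dirpath, _dirnames, filenames in os.walk(tree):
            d_ino = os.stat(dirpath).st_ino
            spreads = [abs(os.lstat(os.path.join(dirpath, f)).st_ino - d_ino)
                       for f in filenames]
            if spreads:
                print(f"{dirpath}: max inode distance {max(spreads)}")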
Les Mikesell wrote at about 14:26:47 -0500 on Friday, August 28, 2009:
Jim Wilcoxson wrote:
Michael - I have a new Linux/FreeBSD backup program, HashBackup, in
beta that I believe will handle a large backuppc server. In tests, it
will back up a single directory with 15M (empty)
I was just thinking about syncing servers. What if we just made an effort
to sync the pool/cpool directory and the config files and then for the rest
of the files in the pc/ directory run a script on the backuppc side to
discover all the hard links. They push that list to the copy and have the
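A sketch of the discovery script that idea calls for: index the pool by (device, inode), then walk pc/ and emit one pool-file/pc-file pair per hard link, so a mirror with an already-synced pool can recreate the links cheaply (names and output format here are illustrative, not BackupPC's own):

    import os

    def hardlink_map(pool_dirs, pc_dir):
        # Map each pool inode to its pool path.
        by_inode = {}
        for pool in pool_dirs:
            for dirpath, _dirs, files in os.walk(pool):
                for f in files:
                    p = os.path.join(dirpath, f)
                    st = os.lstat(p)
                    by_inode[(st.st_dev, st.st_ino)] = p
        # Emit "pool_file<TAB>pc_file" for every pc/ entry that is a
        # hard link into the pool.
        for dirpath, _dirs, files in os.walk(pc_dir):
            for f in files:
                p = os.path.join(dirpath, f)
                st = os.lstat(p)
                pool_path = by_inode.get((st.st_dev, st.st_ino))
                if pool_path:
                    print(f"{pool_path}\t{p}")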
dan wrote at about 13:58:10 -0600 on Sunday, August 30, 2009:
I was just thinking about syncing servers. What if we just made an effort
to sync the pool/cpool directory and the config files and then for the rest
of the files in the pc/ directory run a script on the backuppc side to
dan wrote:
I was just thinking about syncing servers. What if we just made an
effort to sync the pool/cpool directory and the config files and then
for the rest of the files in the pc/ directory run a script on the
backuppc side to discover all the hard links. They push that list to
the
On 8/28/09, Les Mikesell lesmikes...@gmail.com wrote:
Jim Wilcoxson wrote:
What does it use to map the hardlinks internally? Is this
likely to remain correct if backuppc rebuilds collision chains in the
pool during a copy - or even before the next incremental?
I'm not very familiar with the
Jim Wilcoxson wrote:
Michael - I have a new Linux/FreeBSD backup program, HashBackup, in
beta that I believe will handle a large backuppc server. In tests, it
will back up a single directory with 15M (empty) files/hardlinks, with
32000 hard links to each file, and can do the initial and
Hi Les - thanks for trying it out!
It sounds like you are seeing about 300GB in 1200 minutes, or 4
minutes per GB. That's about what I see on average when backing up a
real system initially. Yesterday I backed up 33GB on a G5 Mac (the
Mac version isn't released yet), and it took 110 minutes.
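Converting both figures to a common rate shows they are in the same ballpark (treating 1 GB as 1024 MB):

    # Sanity check on the rates quoted above (sizes in GB, times in minutes).
    for gb, minutes in ((300, 1200), (33, 110)):
        print(f"{minutes / gb:.2f} min/GB = {gb * 1024 / (minutes * 60):.1f} MB/s")
    # prints: 4.00 min/GB = 4.3 MB/s, then 3.33 min/GB = 5.1 MB/s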
Jim Wilcoxson wrote:
Hi Les - thanks for trying it out!
It sounds like you are seeing about 300GB in 1200 minutes, or 4
minutes per GB. That's about what I see on average when backing up a
real system initially. Yesterday I backed up 33GB on a G5 Mac (the
Mac version isn't released yet),
On 8/28/09, Les Mikesell lesmikes...@gmail.com wrote:
Unfortunately I was testing on the same disk where I do a weekly image
copy so I'll have to start over later - but I do have another place to
try it. What does it use to map the hardlinks internally? Is this
likely to remain correct if
Jim Wilcoxson wrote:
What does it use to map the hardlinks internally? Is this
likely to remain correct if backuppc rebuilds collision chains in the
pool during a copy - or even before the next incremental?
I'm not very familiar with the internals of BackupPC, but if you have
a set of
Carl Wilhelm Soderstrom wrote:
On 08/25 10:16 , Osburn, Michael wrote:
I have a pair of backup servers that are backing up the same site. We
are doing this so that in the event the backuppc server dies, we can
easily restore it from
Michael - I have a new Linux/FreeBSD backup program, HashBackup, in
beta that I believe will handle a large backuppc server. In tests, it
will back up a single directory with 15M (empty) files/hardlinks
All,
I have a pair of backup servers that are backing up the same site. We
are doing this so that in the event the backuppc server dies, we can
easily restore it from the other one. It used to be an easy task to keep
them in sync when we had only a few hosts (30-40) being backed up but
now that
On 08/25 10:16 , Osburn, Michael wrote:
I have a pair of backup servers that are backing up the same site. We
are doing this so that in the event the backuppc server dies, we can
easily restore it from the other one. It used to be an easy task to keep
them in sync when we had only a few hosts
Carl Wilhelm Soderstrom wrote:
On 08/25 10:16 , Osburn, Michael wrote:
I have a pair of backup servers that are backing up the same site. We
are doing this so that in the event the backuppc server dies, we can
easily restore it from the other one. It used to be an easy task to keep
them in
Michael - I have a new Linux/FreeBSD backup program, HashBackup, in
beta that I believe will handle a large backuppc server. In tests, it
will back up a single directory with 15M (empty) files/hardlinks, with
32000 hard links to each file, and can do the initial and incremental
backups on this