Re: [BackupPC-users] Keeping servers in sync

2009-09-02 Thread dan
On Tue, Sep 1, 2009 at 12:05 AM, Les Mikesell lesmikes...@gmail.com wrote: Jim Leonard wrote: Les Mikesell wrote: With backuppc the issue is not so much fragmentation within a file as the distance between the directory entry, the inode, and the file content. When creating a new file,

Re: [BackupPC-users] Keeping servers in sync

2009-09-01 Thread Les Mikesell
Jim Leonard wrote: Les Mikesell wrote: With backuppc the issue is not so much fragmentation within a file as the distance between the directory entry, the inode, and the file content. When creating a new file, filesystems generally attempt to allocate these close to each other, but when

Re: [BackupPC-users] Keeping servers in sync

2009-09-01 Thread Les Mikesell
Jim Wilcoxson wrote: Hi Les - thanks for trying it out! It sounds like you are seeing about 300GB in 1200 minutes, or 4 minutes per GB. That's about what I see on average when backing up a real system initially. Yesterday I backed up 33GB on a G5 Mac (the Mac version isn't released yet),

Re: [BackupPC-users] Keeping servers in sync

2009-09-01 Thread Jim Wilcoxson
On 9/1/09, Les Mikesell lesmikes...@gmail.com wrote: Jim Wilcoxson wrote: Hi Les - thanks for trying it out! It sounds like you are seeing about 300GB in 1200 minutes, or 4 minutes per GB. That's about what I see on average when backing up a real system initially. Yesterday I backed up

Re: [BackupPC-users] Keeping servers in sync

2009-09-01 Thread Holger Parplies
Hi, Jim Wilcoxson wrote on 2009-08-31 08:08:48 -0400 [Re: [BackupPC-users] Keeping servers in sync]: [...] I did some reading today about BackupPC's storage layout and design. I haven't finished yet, but one thing stuck out: BackupPC_link reads the NewFileList written by BackupPC_dump

Re: [BackupPC-users] Keeping servers in sync

2009-09-01 Thread Jeffrey J. Kosowsky
Holger Parplies wrote at about 19:00:40 +0200 on Tuesday, September 1, 2009: Hi, Jim Wilcoxson wrote on 2009-08-31 08:08:48 -0400 [Re: [BackupPC-users] Keeping servers in sync]: [...] I did some reading today about BackupPC's storage layout and design. I haven't finished yet

Re: [BackupPC-users] Keeping servers in sync

2009-09-01 Thread Jeffrey J. Kosowsky
Jeffrey J. Kosowsky wrote at about 13:58:41 -0400 on Tuesday, September 1, 2009: It seems like a lot of issues with file-level BackupPC backups (both full and incremental) could be solved if we had the following: 1. No chain renumbering - either by using the full file md5sum or other
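The "full file md5sum" idea above would give every pool file a stable name and avoid chain renumbering. As a minimal sketch of what that hash would look like (not BackupPC's actual code, which hashes only part of large files; the function name is hypothetical, and the chunked read keeps memory flat on big pool files):

```python
import hashlib

def full_file_md5(path, chunk_size=1 << 20):
    """Hash the entire file in 1 MiB chunks so large files never load into RAM."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Because the digest covers the whole file, two files collide only on a genuine md5 collision, so pool entries would never need renumbered chain suffixes.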

Re: [BackupPC-users] Keeping servers in sync

2009-08-31 Thread Jim Wilcoxson
On 8/30/09, Jeffrey J. Kosowsky backu...@kosowsky.org wrote: Les Mikesell wrote at about 14:26:47 -0500 on Friday, August 28, 2009: Jim Wilcoxson wrote: Michael - I have a new Linux/FreeBSD backup program, HashBackup, in beta that I believe will handle a large backuppc server. In

Re: [BackupPC-users] Keeping servers in sync

2009-08-31 Thread Michael Stowe
Another thing about BackupPC is that by my reading, new files are first written to the PC area, then pool links are created by BackupPC_link. This suggests that backing up the pool last might improve performance, because it is likely to be more fragmented. Let me just say ... huh? What
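The write-to-pc-area-then-link behavior described above can be modeled roughly as follows. This is a simplified sketch, not BackupPC's implementation: the real BackupPC_link compares compressed files and handles hash-collision chains, whereas here `filecmp` stands in for the content check and collisions are merely reported.

```python
import filecmp
import os

def link_into_pool(new_file, pool_path):
    """Simplified model of the pool-link step: if an identical file is
    already pooled, replace the pc/ copy with a hard link to it; otherwise
    the pc/ copy seeds the pool via a new hard link."""
    if os.path.exists(pool_path):
        if filecmp.cmp(new_file, pool_path, shallow=False):
            os.unlink(new_file)           # drop the duplicate data
            os.link(pool_path, new_file)  # hard-link pc/ entry to the pool copy
            return "linked"
        return "collision"  # real BackupPC would try the next chain suffix
    os.link(new_file, pool_path)          # first occurrence becomes the pool file
    return "pooled"
```

Either way the pc/ path and the pool path end up as two names for one inode, which is exactly why copying the pc/ tree without hard-link awareness explodes in size.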

Re: [BackupPC-users] Keeping servers in sync

2009-08-31 Thread Les Mikesell
Michael Stowe wrote: Another thing about BackupPC is that by my reading, new files are first written to the PC area, then pool links are created by BackupPC_link. This suggests that backing up the pool last might improve performance, because it is likely to be more fragmented. Let me just

Re: [BackupPC-users] Keeping servers in sync

2009-08-31 Thread Jim Leonard
Les Mikesell wrote: With backuppc the issue is not so much fragmentation within a file as the distance between the directory entry, the inode, and the file content. When creating a new file, filesystems generally attempt to allocate these close to each other, but when you link an existing

Re: [BackupPC-users] Keeping servers in sync

2009-08-30 Thread Jeffrey J. Kosowsky
Les Mikesell wrote at about 14:26:47 -0500 on Friday, August 28, 2009: Jim Wilcoxson wrote: Michael - I have a new Linux/FreeBSD backup program, HashBackup, in beta that I believe will handle a large backuppc server. In tests, it will back up a single directory with 15M (empty)

Re: [BackupPC-users] Keeping servers in sync

2009-08-30 Thread dan
I was just thinking about syncing servers. What if we just made an effort to sync the pool/cpool directory and the config files and then for the rest of the files in the pc/ directory run a script on the backuppc side to discover all the hard links. They push that list to the copy and have the
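The hard-link discovery script proposed above could work by grouping paths under pc/ by their (device, inode) pair. A minimal sketch (function name hypothetical; note that on a real BackupPC tree with millions of inodes this walk is itself expensive, which is the core problem the thread keeps running into):

```python
import os
from collections import defaultdict

def discover_hardlinks(top):
    """Walk a tree and group file paths by (device, inode); any group with
    more than one member is a hard-link set that the receiving server must
    recreate with link() rather than a fresh copy."""
    groups = defaultdict(list)
    for dirpath, _, filenames in os.walk(top):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.lstat(path)
            groups[(st.st_dev, st.st_ino)].append(path)
    return {key: paths for key, paths in groups.items() if len(paths) > 1}
```

One caveat: this only finds link sets whose members all fall under `top`. Links that point into the separately-synced pool show up only through `st_nlink`, so the pushed list would need to carry the pool hash as well for the copy to relink against its own pool.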

Re: [BackupPC-users] Keeping servers in sync

2009-08-30 Thread Jeffrey J. Kosowsky
dan wrote at about 13:58:10 -0600 on Sunday, August 30, 2009: I was just thinking about syncing servers. What if we just made an effort to sync the pool/cpool directory and the config files and then for the rest of the files in the pc/ directory run a script on the backuppc side to

Re: [BackupPC-users] Keeping servers in sync

2009-08-30 Thread Les Mikesell
dan wrote: I was just thinking about syncing servers. What if we just made an effort to sync the pool/cpool directory and the config files and then for the rest of the files in the pc/ directory run a script on the backuppc side to discover all the hard links. They push that list to the

Re: [BackupPC-users] Keeping servers in sync

2009-08-29 Thread Jim Wilcoxson
On 8/28/09, Les Mikesell lesmikes...@gmail.com wrote: Jim Wilcoxson wrote: What does it use to map the hardlinks internally? Is this likely to remain correct if backuppc rebuilds collision chains in the pool during a copy - or even before the next incremental? I'm not very familiar with the

Re: [BackupPC-users] Keeping servers in sync

2009-08-28 Thread Les Mikesell
Jim Wilcoxson wrote: Michael - I have a new Linux/FreeBSD backup program, HashBackup, in beta that I believe will handle a large backuppc server. In tests, it will back up a single directory with 15M (empty) files/hardlinks, with 32000 hard links to each file, and can do the initial and

Re: [BackupPC-users] Keeping servers in sync

2009-08-28 Thread Jim Wilcoxson
Hi Les - thanks for trying it out! It sounds like you are seeing about 300GB in 1200 minutes, or 4 minutes per GB. That's about what I see on average when backing up a real system initially. Yesterday I backed up 33GB on a G5 Mac (the Mac version isn't released yet), and it took 110 minutes.

Re: [BackupPC-users] Keeping servers in sync

2009-08-28 Thread Les Mikesell
Jim Wilcoxson wrote: Hi Les - thanks for trying it out! It sounds like you are seeing about 300GB in 1200 minutes, or 4 minutes per GB. That's about what I see on average when backing up a real system initially. Yesterday I backed up 33GB on a G5 Mac (the Mac version isn't released yet),

Re: [BackupPC-users] Keeping servers in sync

2009-08-28 Thread Jim Wilcoxson
On 8/28/09, Les Mikesell lesmikes...@gmail.com wrote: Unfortunately I was testing on the same disk where I do a weekly image copy so I'll have to start over later - but I do have another place to try it. What does it use to map the hardlinks internally? Is this likely to remain correct if

Re: [BackupPC-users] Keeping servers in sync

2009-08-28 Thread Les Mikesell
Jim Wilcoxson wrote: What does it use to map the hardlinks internally? Is this likely to remain correct if backuppc rebuilds collision chains in the pool during a copy - or even before the next incremental? I'm not very familiar with the internals of BackupPC, but if you have a set of

Re: [BackupPC-users] Keeping servers in sync

2009-08-26 Thread Osburn, Michael
Carl Wilhelm Soderstrom wrote: On 08/25 10:16 , Osburn, Michael wrote: I have a pair of backup servers that are backing up the same site. We are doing this so that in the event the backuppc server dies, we can easily restore it from

Re: [BackupPC-users] Keeping servers in sync

2009-08-26 Thread Osburn, Michael
Michael - I have a new Linux/FreeBSD backup program, HashBackup, in beta that I believe will handle a large backuppc server. In tests, it will back up a single directory with 15M (empty) files/hardlinks

[BackupPC-users] Keeping servers in sync

2009-08-25 Thread Osburn, Michael
All, I have a pair of backup servers that are backing up the same site. We are doing this so that in the event the backuppc server dies, we can easily restore it from the other one. It used to be an easy task to keep them in sync when we had only a few hosts (30-40) being backed up but now that

Re: [BackupPC-users] Keeping servers in sync

2009-08-25 Thread Carl Wilhelm Soderstrom
On 08/25 10:16 , Osburn, Michael wrote: I have a pair of backup servers that are backing up the same site. We are doing this so that in the event the backuppc server dies, we can easily restore it from the other one. It used to be an easy task to keep them in sync when we had only a few hosts

Re: [BackupPC-users] Keeping servers in sync

2009-08-25 Thread Les Mikesell
Carl Wilhelm Soderstrom wrote: On 08/25 10:16 , Osburn, Michael wrote: I have a pair of backup servers that are backing up the same site. We are doing this so that in the event the backuppc server dies, we can easily restore it from the other one. It used to be an easy task to keep them in

Re: [BackupPC-users] Keeping servers in sync

2009-08-25 Thread Jim Wilcoxson
Michael - I have a new Linux/FreeBSD backup program, HashBackup, in beta that I believe will handle a large backuppc server. In tests, it will back up a single directory with 15M (empty) files/hardlinks, with 32000 hard links to each file, and can do the initial and incremental backups on this