Re: [BackupPC-users] Bare Metal Restore

2008-04-09 Thread Les Mikesell
Jack wrote:
> Ok, I have not found this, and it must be 'out there' somewhere.
>
> I would like to be able to do a 'bare metal restore' for both Linux and
> Windows.
>
> I have done it using a commercial product on Windows (basics were:
> build a 'new' installation with just the minimal network basics, with
> the c:\windows…

[BackupPC-users] Bare Metal Restore

2008-04-09 Thread Jack
Ok, I have not found this, and it must be 'out there' somewhere. I would like to be able to do a 'bare metal restore' for both Linux and Windows. I have done it using a commercial product on Windows (basics were: build a 'new' installation with just the minimal network basics, with the c:\windows…
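
For the Linux side, a minimal sketch of the restore step, assuming a client backed up through a '/' share, Debian-style BackupPC paths, and a rescue system booted on the target with sshd running (hostnames and mount points below are invented):

  # On the BackupPC server: stream the most recent dump (-n -1) of the
  # whole root share as a tar archive and unpack it onto the rebuilt disk.
  /usr/share/backuppc/bin/BackupPC_tarCreate -h myclient -n -1 -s / . \
      | ssh root@myclient 'cd /mnt/restore && tar xpf -'
  # Then reinstall the boot loader from the rescue shell, e.g.:
  #   chroot /mnt/restore grub-install /dev/sda

Windows needs the extra step Jack describes: a minimal OS install first, then data and settings restored over it.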

Re: [BackupPC-users] improving the deduplication ratio

2008-04-09 Thread Michael Barrow
>>> I've seen that some commercial backup systems with built-in
>>> deduplication (which often run under Linux :-) split files into
>>> 128k or 256k chunks prior to deduplication.
>>>
>>> It's a nice way to improve the deduplication ratio for big log files,
>>> mbox files, binary DBs that are rarely updated, etc. …

[BackupPC-users] BackupPC_zipCreate

2008-04-09 Thread Daniel Denson
I'm feeling a bit dumb about BackupPC_zipCreate, but I finally figured it out. Some people have figured it out on the mailing list but never put it in a really coherent statement, so I thought I'd drop this tidbit on everyone: BackupPC_zipCreate -h host -n dumpNum -c compressionLevel -s shareName…
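
To make that concrete, a sample invocation, assuming a Debian-style install path and a host with a /home share (the hostname and paths are invented; -n -1 selects the most recent dump, -c 5 a middling zip compression level, and the trailing path is relative to the share):

  sudo -u backuppc /usr/share/backuppc/bin/BackupPC_zipCreate \
      -h myhost -n -1 -s /home -c 5 /jack > /tmp/jack-home.zip

The zip is written to stdout, so redirect it somewhere with room.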

Re: [BackupPC-users] Large file stalls backup

2008-04-09 Thread David Birnbaum
The --partial option seems like it would do the trick. Can anyone comment as to how hard it would be to put it into File::RsyncP? David.

- On Wed, 9 Apr 2008, Les Mikesell wrote:
> David Birnbaum wrote:
>>> One approach is to use a VPN connection to the remote site. Openvpn has
>>> an option to do keepalives on the line - and to do lzo compression on
>>> the data. …
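
For reference, this is the stock rsync behaviour being asked for (the paths here are invented):

  # --partial keeps a partially transferred file on the receiver when the
  # connection drops, so the next run resumes instead of starting over:
  rsync -a --partial --timeout=300 user@remote:/data/huge.db /backup/
  # --partial-dir stashes the fragment out of the way between runs:
  rsync -a --partial --partial-dir=.rsync-partial \
      user@remote:/data/huge.db /backup/

File::RsyncP would need equivalent receiver-side handling of the partial file for BackupPC to benefit.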

Re: [BackupPC-users] Large file stalls backup

2008-04-09 Thread Les Mikesell
David Birnbaum wrote:
>> One approach is to use a VPN connection to the remote site. Openvpn
>> has an option to do keepalives on the line - and to do lzo compression
>> on the data.
>
> These files are often compressed already - for example, one client has
> very large MP3 files that cont…
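
The OpenVPN knobs in question, as a hypothetical client-config excerpt (names and addresses invented):

  remote vpn.example.com 1194
  proto udp
  dev tun
  # send a ping every 10s; declare the link dead after 60s of silence
  keepalive 10 60
  # LZO compression; its adaptive mode backs off on incompressible data,
  # so already-compressed MP3s mostly pass through untouched
  comp-lzo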

Re: [BackupPC-users] Large file stalls backup

2008-04-09 Thread David Birnbaum
On Wed, 9 Apr 2008, Les Mikesell wrote:
>> I've been using BackupPC for several years now, but one problem that I've
>> never come up with a good answer for is when a single large file is too
>> big to transfer completely in the time the backup can run before timing
>> out. For example, a 10M local datafile, backing up over a 768k up…

Re: [BackupPC-users] Large file stalls backup

2008-04-09 Thread John Rouillard
On Wed, Apr 09, 2008 at 09:20:13AM -0400, David Birnbaum wrote:
> Does anyone have a workaround or fix for this? Is it possible to change
> BackupPC so it doesn't remove the in-progress file, but instead
> copies it into the pool so rsync will pick up where it left off last time?
> There doesn't…
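
Until File::RsyncP learns --partial, one stopgap is to break the big file into pieces on the client before each dump, so every piece fits inside the timeout window and unchanged pieces pool normally on later runs. A sketch only, with an invented helper script name, path, and piece size:

  # /usr/local/bin/split-bigfiles, run on the client before each dump via
  #   $Conf{DumpPreUserCmd} = '$sshPath -q -x root@$host /usr/local/bin/split-bigfiles';
  # Split the problem file into 512 MB pieces next to it; exclude the
  # original from the backup and back up the pieces instead.
  split -b 512M /data/huge.db /data/huge.db.part.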

Re: [BackupPC-users] improving the deduplication ratio

2008-04-09 Thread Les Mikesell
Tino Schwarze wrote:
>> I've seen that some commercial backup systems with built-in
>> deduplication (which often run under Linux :-) split files into
>> 128k or 256k chunks prior to deduplication.
>>
>> It's a nice way to improve the deduplication ratio for big log files,
>> mbox files, binary DBs that are rarely updated, etc. …

[BackupPC-users] BackupPC_nightly does nothing

2008-04-09 Thread Tino Schwarze
Hi there, I've got to write about this issue again. Since my upgrade to BackupPC 3.1.0, the BackupPC_nightly job doesn't seem to do anything. The log looks like this:

2008-04-09 06:00:02 Next wakeup is 2008-04-09 22:00:00
2008-04-09 06:00:11 BackupPC_nightly now running BackupPC_sendEmail
2008-04…
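
For anyone chasing the same symptom, the nightly pass can be triggered by hand through the running daemon and watched in the log (paths assume a Debian-style layout; adjust for your install):

  sudo -u backuppc /usr/share/backuppc/bin/BackupPC_serverMesg BackupPC_nightly run
  tail -f /var/lib/backuppc/log/LOG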

Re: [BackupPC-users] improving the deduplication ratio

2008-04-09 Thread Tino Schwarze
On Wed, Apr 09, 2008 at 04:32:13PM +0200, Ludovic Drolez wrote:
> I've seen that some commercial backup systems with built-in
> deduplication (which often run under Linux :-) split files into
> 128k or 256k chunks prior to deduplication.
>
> It's a nice way to improve the deduplication ratio for big log files,
> mbox files, binary DBs that are rarely updated, etc. …

[BackupPC-users] improving the deduplication ratio

2008-04-09 Thread Ludovic Drolez
Hi! I've seen that some commercial backup systems with built-in deduplication (which often run under Linux :-) split files into 128k or 256k chunks prior to deduplication. It's a nice way to improve the deduplication ratio for big log files, mbox files, binary DBs that are rarely updated, etc. Only the…
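
BackupPC's pool deduplicates whole files by hash, so chunking is what makes the difference for append-mostly data. A toy illustration of the idea with invented paths (not how any of these products actually store chunks):

  # Split a growing log into fixed 256k chunks and hash each chunk; after
  # the log grows, every old chunk hashes the same, so only the tail is new.
  split -b 256K /var/log/mail.log /tmp/chunk.
  sha1sum /tmp/chunk.*

Fixed-size chunks only help when data is appended or changed in place; an insertion near the front of a file shifts every later chunk boundary, which is why some systems use content-defined chunking instead.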

Re: [BackupPC-users] Large file stalls backup

2008-04-09 Thread Les Mikesell
David Birnbaum wrote:
> I've been using BackupPC for several years now, but one problem that I've
> never come up with a good answer for is when a single large file is too
> big to transfer completely in the time the backup can run before timing
> out. For example, a 10M local datafile, backing up over a 768k up…

[BackupPC-users] Large file stalls backup

2008-04-09 Thread David Birnbaum
Greetings, I've been using BackupPC for several years now, but one problem that I've never come up with a good answer for is when a single large file is too big to transfer completely in the time the backup can run before timing out. For example, a 10M local datafile, backing up over a 768k up…
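
One server-side knob worth checking first is the per-client timeout in config.pl; the value below (24 hours) is just an example:

  # Seconds BackupPC waits on the transfer before giving up and
  # discarding the in-progress file; can also be set per host.
  $Conf{ClientTimeout} = 86400;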