Jack wrote:
> Ok, I have not found this, and it must be 'out there' somewhere.
>
> I would like to be able to do a 'bare metal restore' for both Linux and
> Windows.
>
> I have done it using a commercial product on Windows (basics were:
> build a 'new' installation with just the minimal network b
Ok, I have not found this, and it must be 'out there' somewhere.
I would like to be able to do a 'bare metal restore' for both Linux and
Windows.
I have done it using a commercial product on Windows (basics were:
build a 'new' installation with just the minimal network basics, with the
c:\windows
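For the Linux half of this, one rough approach (the host name, share name
and paths below are only placeholders) is to lay down a minimal install
with sshd running, then stream the most recent backup back out of BackupPC
onto it:

  # pull the latest dump (-n -1) of share / for host pc1 as a tar stream
  # and unpack it in place on the freshly installed machine
  BackupPC_tarCreate -h pc1 -n -1 -s / . | ssh root@pc1 'cd / && tar xpf -'

The boot loader and /etc/fstab usually still need fixing by hand afterwards.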
>>> I've seen that some commercial backup systems with built-in
>>> deduplication (which often run under Linux :-) split files into
>>> 128k or 256k chunks prior to deduplication.
>>>
>>> This improves the deduplication ratio for big log files, mbox
>>> files, binary DBs that aren't updated often, etc.
I'm feeling a bit dumb about BackupPC_zipCreate, but I finally figured it
out. Some people have figured it out on the mailing list but never put
it into a really coherent statement, so I thought I'd drop this tidbit on
everyone:
BackupPC_zipCreate -h host -n dumpNum -c compressionLevel -s shareName
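For example (the host, dump number, compression level and share below are
just placeholders; -n -1 means the most recent backup, and the archive is
written to stdout, so redirect it to a file):

  # zip up everything ('.') under the /home share from the latest dump of myhost
  BackupPC_zipCreate -h myhost -n -1 -c 5 -s /home . > /tmp/myhost-home.zip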
The --partial option seems like it would do the trick.
Can anyone comment as to how hard it would be to put it into File::RsyncP?
David.
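For comparison, this is what --partial does with stand-alone rsync (outside
of File::RsyncP); the paths and host here are made up:

  # keep interrupted transfers under .rsync-partial on the receiver so the
  # next run resumes the big file instead of starting from scratch
  rsync -av --partial --partial-dir=.rsync-partial --timeout=7200 \
      /data/hugefile.db backup@server.example.com:/backups/

With plain --partial the half-transferred file is simply left in place;
--partial-dir parks it in a separate directory until the next run reuses it.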
On Wed, 9 Apr 2008, Les Mikesell wrote:
> David Birnbaum wrote:
>
>>>
>>> One approach is to use a VPN connection to the remote site. OpenVPN has
>>
David Birnbaum wrote:
>>
>> One approach is to use a VPN connection to the remote site. OpenVPN
>> has an option to do keepalives on the line, and to do LZO compression
>> on the data.
>
> These files are often compressed already - for example, one client has
> very large MP3 files that cont
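For reference, the keepalive and compression knobs mentioned above look
roughly like this on a stock openvpn command line (the server name and port
are placeholders):

  # ping every 10s, restart the tunnel after 60s of silence, compress with LZO
  openvpn --client --dev tun --remote vpn.example.com 1194 \
      --keepalive 10 60 --comp-lzo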
On Wed, 9 Apr 2008, Les Mikesell wrote:
>> I've been using BackupPC for several years now, but one problem that I've
>> never come up with a good answer for is when a single large file is too big
>> to transfer completely in the time the backup can run before timing out.
>> For example, a 10M l
On Wed, Apr 09, 2008 at 09:20:13AM -0400, David Birnbaum wrote:
> Does anyone have a workaround or fix for this? Is it possible to change
> BackupPC so it doesn't remove the in-progress file, but instead
> copies it into the pool so rsync will pick up where it left off last time?
> There doesn't
Tino Schwarze wrote:
>
>> I've seen that some commercial backup systems with built-in
>> deduplication (which often run under Linux :-) split files into 128k
>> or 256k chunks prior to deduplication.
>>
>> This improves the deduplication ratio for big log files, mbox
>> files, binary DBs that aren't updated often, etc.
Hi there,
I've got to write about this issue again. Since my upgrade to BackupPC
3.1.0, the BackupPC_nightly job doesn't seem to do anything. The log
looks like this:
2008-04-09 06:00:02 Next wakeup is 2008-04-09 22:00:00
2008-04-09 06:00:11 BackupPC_nightly now running BackupPC_sendEmail
2008-04
On Wed, Apr 09, 2008 at 04:32:13PM +0200, Ludovic Drolez wrote:
> I've seen that some commercial backup systems with built-in
> deduplication (which often run under Linux :-) split files into
> 128k or 256k chunks prior to deduplication.
>
> This improves the deduplication ratio for big log files, mbox
> files, binary DBs that aren't updated often, etc.
Hi!
I've seen that some commercial backup systems with built-in
deduplication (which often run under Linux :-) split files into
128k or 256k chunks prior to deduplication.
This improves the deduplication ratio for big log files, mbox
files, binary DBs that aren't updated often, etc. Only the
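A rough way to see the idea on a real file (the paths are made up): split it
into fixed 128k chunks and hash them; every hash that repeats is a chunk a
block-level pool would only need to store once:

  mkdir /tmp/chunks
  # -a 5 allows enough suffix characters for a large file's worth of chunks
  split -a 5 -b 128k /var/mail/biguser.mbox /tmp/chunks/chunk_
  # count how many chunk hashes occur more than once
  sha1sum /tmp/chunks/chunk_* | awk '{print $1}' | sort | uniq -d | wc -l

Run the same thing over yesterday's and today's copy of an append-only log
and most of the hashes match, which is where the ratio improvement comes from.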
David Birnbaum wrote:
>
> I've been using BackupPC for several years now, but one problem that I've
> never come up with a good answer for is when a single large file is too
> big to transfer completely in the time the backup can run before timing
> out. For example, a 10M local datafile, b
Greetings,
I've been using BackupPC for several years now, but one problem that I've never
come up with a good answer for is when a single large file is too big to
transfer completely in the time the backup can run before timing out. For
example, a 10M local datafile, backing up over a 768k up