2017-01-09 2:09 GMT+01:00 Zygo Blaxell :
> On Wed, Jan 04, 2017 at 07:58:55AM -0500, Austin S. Hemmelgarn wrote:
>> On 2017-01-03 16:35, Peter Becker wrote:
>> >As I understand the duperemove source code (I have been working on /
>> >trying to improve this code for 5 or 6
On Wed, Jan 04, 2017 at 07:58:55AM -0500, Austin S. Hemmelgarn wrote:
> On 2017-01-03 16:35, Peter Becker wrote:
> >As I understand the duperemove source code (I have been working on /
> >trying to improve this code in several places for 5 or 6 weeks),
> >duperemove does hashing and calculation before it calls extent_same
Thank you for the information and clarifications. This helps me
understand how it works somewhat better. I will continue to look into
the subject.
Regardless of this, I will change the structure of my data in my use
case and rely on rsync --inplace --no-whole-file.
2017-01-04 13:58 GMT+01:00 Austin S. Hemmelgarn :
On 2017-01-03 16:35, Peter Becker wrote:
As I understand the duperemove source code (I have been working on /
trying to improve this code in several places for 5 or 6 weeks),
duperemove does all hashing and calculation before it calls extent_same.
Duperemove stores everything in a hashfile and reads it back. After all
On 04.01.2017 00:43 Hans van Kranenburg wrote:
> On 01/04/2017 12:12 AM, Peter Becker wrote:
>> Good hint, this would be an option and I will try it.
>>
>> Regardless of this, curiosity has gotten hold of me and I will try to
>> figure out where the problem with the low transfer rate is.
>>
>>
On 01/04/2017 12:12 AM, Peter Becker wrote:
> Good hint, this would be an option and I will try it.
>
> Regardless of this, curiosity has gotten hold of me and I will try to
> figure out where the problem with the low transfer rate is.
>
> 2017-01-04 0:07 GMT+01:00 Hans van Kranenburg
>
Good hint, this would be an option and I will try it.
Regardless of this, curiosity has gotten hold of me and I will try to
figure out where the problem with the low transfer rate is.
2017-01-04 0:07 GMT+01:00 Hans van Kranenburg :
> On 01/03/2017 08:24 PM, Peter Becker wrote:
On 01/03/2017 08:24 PM, Peter Becker wrote:
> All the objections are justified, but they are not relevant in
> (offline) backup and archive scenarios.
>
> For example, you have multiple versions of append-only log files or
> append-only DB files (each more than 100GB in size), like this:
>
>>
As I understand the duperemove source code (I have been working on /
trying to improve this code in several places for 5 or 6 weeks),
duperemove does all hashing and calculation before it calls extent_same.
Duperemove stores everything in a hashfile and reads it back. After all
files are hashed and the duplicates are detected, the extent_same ioctl
is called.
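For reference, a minimal sketch (not duperemove's actual code) of what a
single extent-same request looks like from userspace, assuming a kernel
that exposes the generic FIDEDUPERANGE ioctl (4.5+); duperemove itself
has historically gone through the btrfs-specific
BTRFS_IOC_FILE_EXTENT_SAME, which is the same mechanism underneath:

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <src-file> <dst-file>\n", argv[0]);
        return 1;
    }

    int src = open(argv[1], O_RDONLY);
    int dst = open(argv[2], O_RDWR);
    if (src < 0 || dst < 0) {
        perror("open");
        return 1;
    }

    /* one source range, one destination range; assumes both files are
     * at least 1MB long */
    struct file_dedupe_range *same =
        calloc(1, sizeof(*same) + sizeof(struct file_dedupe_range_info));
    same->src_offset = 0;
    same->src_length = 1024 * 1024;    /* one 1MB block, as in the thread */
    same->dest_count = 1;
    same->info[0].dest_fd = dst;
    same->info[0].dest_offset = 0;

    /* the kernel compares both ranges byte by byte and only reflinks the
     * destination onto the source extent if they are identical */
    if (ioctl(src, FIDEDUPERANGE, same) < 0) {
        perror("FIDEDUPERANGE");
        return 1;
    }

    printf("status=%d bytes_deduped=%llu\n",
           same->info[0].status,
           (unsigned long long)same->info[0].bytes_deduped);

    free(same);
    close(src);
    close(dst);
    return 0;
}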
On 2017-01-03 15:20, Peter Becker wrote:
I think I understand. The resulting key question is how I can improve
the performance of the extent_same ioctl.
I tested it with the following results:
Environment:
2 files, each named "file", each 100GB in size, duperemove nofiemap
option set, 1MB extent size.
-- Forwarded message --
From: Austin S. Hemmelgarn <ahferro...@gmail.com>
Date: 2017-01-03 20:37 GMT+01:00
Subject: Re: [markfasheh/duperemove] Why blocksize is limit to 1MB?
To: Peter Becker <floyd@gmail.com>
On 2017-01-03 14:21, Peter Becker wrote:
>
I think I understand. The resulting key question is how I can improve
the performance of the extent_same ioctl.
I tested it with the following results:
Environment:
2 files, each named "file", each 100GB in size, duperemove nofiemap
option set, 1MB extent size.
duperemove output:
[0x1908590] (13889/72654) Try
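One thing worth noting for the performance question (a hypothetical
sketch, not a claim about where duperemove's time actually goes): a
single FIDEDUPERANGE / extent-same request can carry several destination
ranges at once, so the per-call overhead can be amortized when the same
source block is duplicated in many places. Something along these lines,
assuming the destination fds are already open:

#include <stdlib.h>
#include <sys/types.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

/* Dedupe one source block against n_dst destination ranges in a single
 * ioctl call instead of one call per duplicate. Returns the total bytes
 * deduped, or -1 on error. */
static long long dedupe_block(int src_fd, off_t src_off, size_t block_len,
                              const int *dst_fds, const off_t *dst_offs,
                              int n_dst)
{
    struct file_dedupe_range *req =
        calloc(1, sizeof(*req) + n_dst * sizeof(struct file_dedupe_range_info));
    if (!req)
        return -1;

    req->src_offset = src_off;
    req->src_length = block_len;
    req->dest_count = n_dst;
    for (int i = 0; i < n_dst; i++) {
        req->info[i].dest_fd = dst_fds[i];
        req->info[i].dest_offset = dst_offs[i];
    }

    int ret = ioctl(src_fd, FIDEDUPERANGE, req);
    long long total = 0;
    if (ret == 0)
        for (int i = 0; i < n_dst; i++)
            if (req->info[i].status == FILE_DEDUPE_RANGE_SAME)
                total += req->info[i].bytes_deduped;

    free(req);
    return ret < 0 ? -1 : total;
}

If I remember the btrfs side correctly, a single extent-same request is
also clamped to 16MB (BTRFS_MAX_DEDUPE_LEN) in the kernel, so a very
large block size gets split up there anyway; please correct me if that
has changed.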
All the objections are justified, but they are not relevant in (offline)
backup and archive scenarios.
For example, you have multiple versions of append-only log files or
append-only DB files (each more than 100GB in size), like this:
> Snapshot_01_01_2017
-> file1.log .. 201 GB
> Snapshot_02_01_2017
->
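For this kind of append-only layout, a hypothetical sketch of a
prefix-dedupe pass (the snapshot paths and the 1MB chunk size are made
up for illustration; this is not duperemove's code): walk the older
file, which is assumed to be a pure prefix of the newer one, in
fixed-size chunks and stop at the first chunk the kernel reports as
different.

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/stat.h>
#include <linux/fs.h>

#define CHUNK (1024 * 1024)   /* 1MB per call, as in the test above */

int main(void)
{
    /* the newer, longer file is the dedupe source; the older snapshot's
     * file is (by assumption) a pure prefix of it and is the destination */
    int src = open("Snapshot_02_01_2017/file1.log", O_RDONLY);
    int dst = open("Snapshot_01_01_2017/file1.log", O_RDWR);
    if (src < 0 || dst < 0) {
        perror("open");
        return 1;
    }

    struct stat st;
    fstat(dst, &st);   /* shared prefix is at most the older file's size */

    struct file_dedupe_range *req =
        calloc(1, sizeof(*req) + sizeof(struct file_dedupe_range_info));
    req->dest_count = 1;
    req->info[0].dest_fd = dst;
    req->src_length = CHUNK;

    /* full 1MB chunks only; an unaligned tail is skipped in this sketch */
    for (off_t off = 0; off + CHUNK <= st.st_size; off += CHUNK) {
        req->src_offset = off;
        req->info[0].dest_offset = off;

        if (ioctl(src, FIDEDUPERANGE, req) < 0) {
            perror("FIDEDUPERANGE");
            break;
        }
        if (req->info[0].status != FILE_DEDUPE_RANGE_SAME) {
            fprintf(stderr, "chunk at offset %lld not identical, stopping\n",
                    (long long)off);
            break;
        }
    }

    free(req);
    close(src);
    close(dst);
    return 0;
}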
On 2016-12-30 15:28, Peter Becker wrote:
Hello, I have an 8 TB volume with multiple files of hundreds of GB each.
I am trying to dedupe it because the first hundred GB of many files are
identical.
With a 128KB blocksize and the nofiemap and lookup-extents=no options,
it will take more than a week (only
To: "Xin Zhou" <xin.z...@gmx.com>
Cc: linux-btrfs <linux-btrfs@vger.kernel.org>
Subject: Re: [markfasheh/duperemove] Why blocksize is limit to 1MB?
> 1M is already a little bit too big in size.
Not in my use case :)
Is it right that this isn't a limit in btrfs? So I could patch this
> achieved.
> 1M is already a little bit too big in size.
>
> Thanks,
> Xin
>
>
>
>
> Sent: Friday, December 30, 2016 at 12:28 PM
> From: "Peter Becker" <floyd@gmail.com>
> To: linux-btrfs <linux-btrfs@vger.kernel.org>
> Subject: [markfasheh/duperemove] Why blocksize is limit to 1MB?
To: linux-btrfs <linux-btrfs@vger.kernel.org>
Subject: [markfasheh/duperemove] Why blocksize is limit to 1MB?
Hello, I have an 8 TB volume with multiple files of hundreds of GB each.
I am trying to dedupe it because the first hundred GB of many files are
identical.
With a 128KB blocksize and the nofiemap and lookup-extents=no options,
it will take
Hello, I have an 8 TB volume with multiple files of hundreds of GB each.
I am trying to dedupe it because the first hundred GB of many files are
identical.
With a 128KB blocksize and the nofiemap and lookup-extents=no options,
it will take more than a week (only the dedupe; everything is already
hashed). So I tried -b 100M
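Just to put rough numbers on why the block size matters here
(back-of-the-envelope only, assuming one dedupe request per block-sized
chunk of the shared 100GB prefix):

#include <stdio.h>

int main(void)
{
    const unsigned long long prefix = 100ULL << 30;  /* ~100GB identical data */
    const unsigned long long block[] = { 128ULL << 10, 1ULL << 20, 100ULL << 20 };
    const char *name[] = { "128KB", "1MB", "100MB" };

    for (int i = 0; i < 3; i++)
        printf("blocksize %-6s -> ~%llu dedupe requests per file pair\n",
               name[i], prefix / block[i]);
    return 0;
}
/* roughly: 819200 requests at 128KB, 102400 at 1MB, 1024 at 100MB */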