On 19/05/2017 at 16:17, Lionel Bouton wrote:
> Hi,
>
> On 19/05/2017 at 15:38, Timofey Titovets wrote:
>> If data compression didn't free at least one PAGE_SIZE, it is useless
>> to store that compressed extent
>>
>> Signed-off-by: Timofey Titovets <nefelim...@gmail.com>
>> ---
>>  fs/btrfs/lzo.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/fs/btrfs/lzo.c b/fs/btrfs/lzo.c
>> index bd0b0938..637ef1b0 100644
>> --- a/fs/btrfs/lzo.c
>> +++ b/fs/btrfs/lzo.c
>> @@ -207,7 +207,7 @@ static int lzo_compress_pages(struct list_head *ws,
>>              }
>>  
>>              /* we're making it bigger, give up */
>> -            if (tot_in > 8192 && tot_in < tot_out) {
>> +            if (tot_in > 8192 && tot_in < tot_out + PAGE_SIZE) {
>>                      ret = -E2BIG;
>>                      goto out;
>>              }
> I'm not familiar with this code, but I was surprised by the test: you
> would expect compression to be beneficial when it frees an actual
> page, not when it merely reduces the data by a page size. So unless I
> misunderstand the context, shouldn't it be something like:
>
> if (tot_in > 8192 && ((tot_in % PAGE_SIZE) <= (tot_out % PAGE_SIZE)))
>
> but looking at the code I see that this is in a while loop, and there's
> another test just after the loop in the existing code:
>
>         if (tot_out > tot_in)
>                 goto out;
>
> There are a couple of things I don't understand, but isn't this
> designed to stream data in small chunks through compression before
> writing it at the end? So isn't this later test the proper location to
> detect whether compression was beneficial?
>
> You might not save a page early in the while loop, while it is still
> working on a subset of the data to compress, but after enough data has
> been processed you could save one. It seems odd that your modification
> could abort compression early even though the same condition would
> become true after enough iterations.
>
> Isn't what you want something like:
>
>         if (tot_out % PAGE_SIZE >= tot_in % PAGE_SIZE)
>                 goto out;
>
> after the loop?
> Using >= instead of > would avoid the decompression overhead in the
> case where the compressed data is smaller but still uses the same
> space on disk.

I was too focused on other problems, and taking a fresh look at what I
wrote, I'm embarrassed by what I read.
The number of pages used by a given amount of data should be
(amount / PAGE_SIZE) + ((amount % PAGE_SIZE) == 0 ? 0 : 1), so my tests
above comparing (amount % PAGE_SIZE) values measure the wrong quantity.
This seems like a common enough computation that the kernel might
already have a macro defined for it.
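For illustration only, here is a minimal sketch of what the corrected
post-loop test could look like. The kernel does provide DIV_ROUND_UP()
(in include/linux/kernel.h), which performs exactly this rounding-up
division; the pages_needed() helper below is hypothetical, named here
just for the example:

        /* Hypothetical helper, only to illustrate the corrected test. */
        static inline unsigned long pages_needed(size_t amount)
        {
                /* DIV_ROUND_UP(n, d) expands to ((n) + (d) - 1) / (d),
                 * i.e. the (amount / PAGE_SIZE) + (remainder ? 1 : 0)
                 * computation above. */
                return DIV_ROUND_UP(amount, PAGE_SIZE);
        }

        /* After the loop: only keep the compressed extent if it
         * occupies strictly fewer pages than the input. */
        if (pages_needed(tot_out) >= pages_needed(tot_in))
                goto out;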