2017-05-19 23:19 GMT+03:00 Lionel Bouton <lionel-subscript...@bouton.name>:
> I was too focused on other problems and having a fresh look at what I
> wrote I'm embarrassed by what I read.
> Used pages for a given amount of data should be (amount / PAGE_SIZE) +
> ((amount % PAGE_SIZE) == 0 ? 0 : 1); this seems like a common enough
> thing to compute that the kernel might have a macro defined for it.
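
For what it's worth, the kernel does have a macro for that rounding:
DIV_ROUND_UP() in include/linux/kernel.h. A minimal standalone sketch
of the page accounting (not the actual btrfs code; PAGE_SIZE assumed
to be 4096 here):

#include <stddef.h>

#define PAGE_SIZE 4096UL
/* same definition as the kernel's DIV_ROUND_UP() */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* number of pages needed to hold "bytes" bytes of data */
static size_t pages_used(size_t bytes)
{
        return DIV_ROUND_UP(bytes, PAGE_SIZE);
}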

If I understand the code correctly, comparing the input and output
sizes in bytes is enough (IMHO) to skip the compression of
non-compressible data, but as btrfs uses PAGE_SIZE as its data cluster
size (and, if I understand correctly, its minimum I/O size), the logic
can be improved for the case 126976 < compressed_size < 131072: the
compressed data is smaller in bytes, but still occupies the same 32
pages as the 128 KiB input (31 * 4096 = 126976 on a 4 KiB-page
system).
The easiest way to handle that case, I think, is to add PAGE_SIZE to
the compressed size before the comparison, so the compressed version
is only kept when it saves at least one whole page.
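
Roughly what I have in mind, as a sketch (the helper name is mine, not
actual fs/btrfs code; total_in/total_out are meant as the byte counts
of the uncompressed and compressed data):

#include <stddef.h>

#define PAGE_SIZE 4096UL /* assumed, as above */

/*
 * Keep the compressed result only if it frees at least one page.
 * Adding PAGE_SIZE to the compressed size before the byte comparison
 * is equivalent to requiring a whole page of savings when the input
 * size is page-aligned.
 */
static int compression_is_worth_it(size_t total_in, size_t total_out)
{
        return total_out + PAGE_SIZE <= total_in;
}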

JFYI:
I once read on this list that btrfs doesn't compress data smaller than
PAGE_SIZE because it would be useless (I couldn't find that in the
code, so I think I don't fully understand the compress routine).

After some time I got the idea that if btrfs decides whether to store
data compressed or not by comparing the input and output sizes (i.e.
if the compressed size is larger than the input, the compressed
version is not stored), this logic can be improved by also checking
that the compression saving is at least PAGE_SIZE: if it is not, the
compressed data will pass the check but consume the same number of
pages, and as a result show no gain.
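
A quick standalone illustration of that case (4 KiB pages assumed; the
127000-byte compressed size is just an example figure):

#include <stdio.h>
#include <stddef.h>

#define PAGE_SIZE 4096UL
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
        size_t input = 131072;      /* one 128 KiB compression chunk */
        size_t compressed = 127000; /* smaller in bytes, passes the check */

        printf("input:      %zu pages\n", DIV_ROUND_UP(input, PAGE_SIZE));
        printf("compressed: %zu pages\n", DIV_ROUND_UP(compressed, PAGE_SIZE));
        /* both print 32: no page is actually saved by compressing */
        return 0;
}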

-- 
Have a nice day,
Timofey.