Hello,

On (08/22/16 16:25), Hui Zhu wrote:
> 
> Currently ZRAM stores every page, even when a page's compression
> ratio is really low.  So the overall compression ratio of ZRAM is
> out of control while it is running.
> For my part, I ran some tests and recorded results with ZRAM.  The
> compression ratio was about 40%.
> 
> This series of patches makes ZRAM store only pages whose compressed
> size is smaller than a configurable value.
> With these patches, I set the value to 2048 and repeated the same
> test as before.  The compression ratio was about 20%, and the number
> of lowmemorykiller invocations also decreased.

I haven't looked at the patches in detail yet, so can you educate me a
bit?  Is your test stable?  Why has the number of lowmemorykiller
invocations decreased?
... or am I reading "The times of lowmemorykiller also decreased" wrong?

Suppose you have X pages that compress badly (from zram's point of
view).  zram stores such pages uncompressed, IOW we get no memory
savings: the swapped-out page lands in zsmalloc's PAGE_SIZE class.
Now you don't try to store those pages in zsmalloc at all, but keep
them as unevictable instead.  So each such page still occupies
PAGE_SIZE; again, no memory saving.  Why did it improve LMK?

        -ss
