Hi Matthew,

On Sat, Mar 11, 2017 at 06:56:40AM -0800, Matthew Wilcox wrote:
> On Mon, Feb 06, 2017 at 12:16:44AM +0900, Minchan Kim wrote:
> > +static inline void zram_fill_page(char *ptr, unsigned long len,
> > +                                   unsigned long value)
> > +{
> > +   int i;
> > +   unsigned long *page = (unsigned long *)ptr;
> > +
> > +   WARN_ON_ONCE(!IS_ALIGNED(len, sizeof(unsigned long)));
> > +
> > +   if (likely(value == 0)) {
> > +           memset(ptr, 0, len);
> > +   } else {
> > +           for (i = 0; i < len / sizeof(*page); i++)
> > +                   page[i] = value;
> > +   }
> > +}
> 
> I've hacked up memset32/memset64 for both ARM and x86 here:
> 
> http://git.infradead.org/users/willy/linux-dax.git/shortlog/refs/heads/memfill

Thanks for the patch.

> 
> Can you do some performance testing and see if it makes a difference?

I tested with zram completely *full* of non-zero, dedupable 100M data
(i.e., an ideal case) on x86. With this, I see a 7% improvement.
        
        perf stat -r 10 dd if=/dev/zram0 of=/dev/null

vanilla:        0.232050465 seconds time elapsed ( +-  0.51% )
memset_l:       0.217219387 seconds time elapsed ( +-  0.07% )

I doubt it brings such a benefit for a read workload with only a small
percentage of non-zero dedup data (e.g., under 3%), but it keeps the
code simple and is still a performance win.
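For reference, here is a minimal sketch of how zram_fill_page could look
on top of your series (assuming memset_l() takes a pointer, the value,
and a count of unsigned longs, as in the C fallback):

	static inline void zram_fill_page(void *ptr, unsigned long len,
					unsigned long value)
	{
		WARN_ON_ONCE(!IS_ALIGNED(len, sizeof(unsigned long)));
		/* memset_l picks memset32/memset64 based on BITS_PER_LONG */
		memset_l(ptr, value, len / sizeof(unsigned long));
	}

The zero special-case goes away entirely since memset_l can handle it.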

Thanks.

> 
> At this point, I'd probably ask for the first 5 patches in that git
> branch to be included, and leave out memfill and the shoddy testsuite.
> 
> I haven't actually tested either asm implementation ... only the
> C fallback.
