Re: [PATCH v5 0/3] zram: add rw_page implementation for zram and clean up unnecessary parameter

2014-10-23 Thread karam.lee
On Thu, Oct 23, 2014 at 08:20:22AM +0900, Minchan Kim wrote:

Thank you for the reply. Your advice was a big help!

I will send the fixed version.

> Hey Karam,
> 
> You could keep the Acked-by/Reviewed-by tags gotten from me and Jerome
> in old versions if the new version isn't changed heavily.
> It's their credit and, at the least, you should respect it. :)
> 
> Please Cc Andrew for zram patches because he merges the patches
> into his tree (ie, mmotm).
> Oops, but get_maintainer.pl -f drivers/block/zram doesn't say it. :(
> 
> Anyway, thanks for the nice work!
> 
> On Wed, Oct 22, 2014 at 05:44:46PM +0900, karam....@lge.com wrote:
> > From: "karam.lee" 
> > 
> > Recently the rw_page block device operation has been added.
> > This patchset implements the rw_page operation for the zram block
> > device and does some clean-up.
> > 
> > Patches 1~2 are clean-ups.
> > Patch 3 implements the rw_page operation.
> > With the rw_page operation, zram can do I/O without allocating a BIO,
> > which saves both time and memory.
> > 
> > karam.lee (3):
> >   zram: remove bio parameter from zram_bvec_rw().
> >   zram: change parameter of valid_io_request()
> >   zram: implement rw_page operation of zram
> 
> Acked-by: Minchan Kim 
> 
> -- 
> Kind regards,
> Minchan Kim
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: [PATCH v3 3/3] zram: implement rw_page operation of zram

2014-10-21 Thread karam.lee
On Tue, Oct 21, 2014 at 03:57:29PM +0200, Jerome Marchand wrote:
> On 10/21/2014 09:27 AM, karam@lge.com wrote:
> > From: "karam.lee" 
> > 
> > This patch implements rw_page operation for zram block device.
> > 
> > I implemented the feature in zram and tested it.
> > The test bed was the LG G2 mobile device, which has an msm8974
> > processor and 2GB of memory.
> > With a memory allocation test program consuming memory, the system
> > generates swap, and the operating time of swap_write_page() was
> > measured.
> > 
> > --------------------------------------------------
> > |             |   operating time   | improvement |
> > |             |  (20 runs average) |             |
> > --------------------------------------------------
> > |with patch   |     1061.15 us     |    +2.4%    |
> > --------------------------------------------------
> > |without patch|     1087.35 us     |             |
> > --------------------------------------------------
> > 
> > Each test result set (with paged I/O, with BIO) shows a normal
> > distribution and has equal variance, so the two values are valid to
> > compare.
> > At a 95% confidence level, I can say that operation with paged I/O
> > (without a BIO) is 2.4% faster.
> > 
> > Signed-off-by: karam.lee 
> > ---
> >  drivers/block/zram/zram_drv.c |   38 ++
> >  1 file changed, 38 insertions(+)
> > 
> > diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> > index 4565fdc..696f0b5 100644
> > --- a/drivers/block/zram/zram_drv.c
> > +++ b/drivers/block/zram/zram_drv.c
> > @@ -810,8 +810,46 @@ static void zram_slot_free_notify(struct block_device *bdev,
> > 	atomic64_inc(&zram->stats.notify_free);
> >  }
> >  
> > +static int zram_rw_page(struct block_device *bdev, sector_t sector,
> > +  struct page *page, int rw)
> > +{
> > +   int offset, ret = 1;
> 
> Small nitpick, but why do you initialize ret to 1? It doesn't seem to be
> ever used (nor is 1 a valid return value AFAICT).
> 
> It otherwise looks good.
> 
> Acked-by: Jerome Marchand 
> 

Thank you for the reply. I agree with you.

It was my mistake to initialize ret to 1.

I will resend a fixed version.

> > +   u32 index;
> > +   struct zram *zram;
> > +   struct bio_vec bv;
> > +
> > +   zram = bdev->bd_disk->private_data;
> > +   if (!valid_io_request(zram, sector, PAGE_SIZE)) {
> > +   atomic64_inc(&zram->stats.invalid_io);
> > +   ret = -EINVAL;
> > +   goto out;
> > +   }
> > +
> > +   down_read(&zram->init_lock);
> > +   if (unlikely(!init_done(zram))) {
> > +   ret = -ENOMEM;
> > +   goto out_unlock;
> > +   }
> > +
> > +   index = sector >> SECTORS_PER_PAGE_SHIFT;
> > +   offset = sector & (SECTORS_PER_PAGE - 1) << SECTOR_SHIFT;
> > +
> > +   bv.bv_page = page;
> > +   bv.bv_len = PAGE_SIZE;
> > +   bv.bv_offset = 0;
> > +
> > +   ret = zram_bvec_rw(zram, &bv, index, offset, rw);
> > +
> > +out_unlock:
> > +   up_read(&zram->init_lock);
> > +out:
> > +   page_endio(page, rw, ret);
> > +   return ret;
> > +}
> > +
> >  static const struct block_device_operations zram_devops = {
> > .swap_slot_free_notify = zram_slot_free_notify,
> > +   .rw_page = zram_rw_page,
> > .owner = THIS_MODULE
> >  };
> >  
> > 
> 
> 



