> Please please measure the performance overhead of this change.
> 

Here are the results.

> > > > > > I made a patch below and measured the time (average of 10 runs)
> > > > > > of a kernel build on tmpfs (make -j8 on an 8-CPU machine with the
> > > > > > 2.6.33 defconfig).
> > > > > > 
> > > > > > <before>
> > > > > > - root cgroup: 190.47 sec
> > > > > > - child cgroup: 192.81 sec
> > > > > > 
> > > > > > <after>
> > > > > > - root cgroup: 191.06 sec
> > > > > > - child cgroup: 193.06 sec
> > > > > > 

<after2(local_irq_save/restore)>
- root cgroup: 191.42 sec
- child cgroup: 193.55 sec

Hmm, I think it's within the error range, but by testing several times I can
see a tendency for it to get slower as I add more code. Using
local_irq_disable()/enable() everywhere except in
mem_cgroup_update_file_mapped() (which may be the only candidate to be called
with IRQs disabled in the future) might be the way to go.
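
For reference, here is a rough sketch of the two variants being compared. This
is not the actual patch; it just uses a dummy per-CPU counter standing in for
the memcg file_mapped statistic, to show why the save/restore form is the one
that tolerates being called with IRQs already disabled.

#include <linux/irqflags.h>
#include <linux/percpu.h>

/* Illustrative only: dummy per-CPU counter, not the real memcg stat. */
static DEFINE_PER_CPU(long, dummy_stat);

/* local_irq_save/restore variant: safe even if the caller already runs
 * with IRQs disabled, because the previous flags are restored. */
static void dummy_stat_update_irqsave(long val)
{
        unsigned long flags;

        local_irq_save(flags);
        __this_cpu_add(dummy_stat, val);
        local_irq_restore(flags);
}

/* local_irq_disable/enable variant: avoids saving the flags, but must
 * only be called with IRQs enabled, since local_irq_enable()
 * unconditionally re-enables them on the way out. */
static void dummy_stat_update_irqdisable(long val)
{
        local_irq_disable();
        __this_cpu_add(dummy_stat, val);
        local_irq_enable();
}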


Thanks,
Daisuke Nishimura.

On Tue, 9 Mar 2010 10:20:58 +0530, Balbir Singh <bal...@linux.vnet.ibm.com> 
wrote:
> * nishim...@mxp.nes.nec.co.jp <nishim...@mxp.nes.nec.co.jp> [2010-03-09 
> 10:29:28]:
> 
> > On Tue, 9 Mar 2010 09:19:14 +0900, KAMEZAWA Hiroyuki 
> > <kamezawa.hir...@jp.fujitsu.com> wrote:
> > > On Tue, 9 Mar 2010 01:12:52 +0100
> > > Andrea Righi <ari...@develer.com> wrote:
> > > 
> > > > On Mon, Mar 08, 2010 at 05:31:00PM +0900, KAMEZAWA Hiroyuki wrote:
> > > > > On Mon, 8 Mar 2010 17:07:11 +0900
> > > > > Daisuke Nishimura <nishim...@mxp.nes.nec.co.jp> wrote:
> > > > > 
> > > > > > On Mon, 8 Mar 2010 11:37:11 +0900, KAMEZAWA Hiroyuki 
> > > > > > <kamezawa.hir...@jp.fujitsu.com> wrote:
> > > > > > > On Mon, 8 Mar 2010 11:17:24 +0900
> > > > > > > Daisuke Nishimura <nishim...@mxp.nes.nec.co.jp> wrote:
> > > > > > > 
> > > > > > > > > But IIRC, clear_writeback is done under tree_lock... no?
> > > > > > > > > 
> > > > > > > > The place where NR_WRITEBACK is updated is outside tree_lock.
> > > > > > > > 
> > > > > > > > int test_clear_page_writeback(struct page *page)
> > > > > > > > {
> > > > > > > >         struct address_space *mapping = page_mapping(page);
> > > > > > > >         int ret;
> > > > > > > > 
> > > > > > > >         if (mapping) {
> > > > > > > >                 struct backing_dev_info *bdi = mapping->backing_dev_info;
> > > > > > > >                 unsigned long flags;
> > > > > > > > 
> > > > > > > >                 spin_lock_irqsave(&mapping->tree_lock, flags);
> > > > > > > >                 ret = TestClearPageWriteback(page);
> > > > > > > >                 if (ret) {
> > > > > > > >                         radix_tree_tag_clear(&mapping->page_tree,
> > > > > > > >                                                 page_index(page),
> > > > > > > >                                                 PAGECACHE_TAG_WRITEBACK);
> > > > > > > >                         if (bdi_cap_account_writeback(bdi)) {
> > > > > > > >                                 __dec_bdi_stat(bdi, BDI_WRITEBACK);
> > > > > > > >                                 __bdi_writeout_inc(bdi);
> > > > > > > >                         }
> > > > > > > >                 }
> > > > > > > >                 spin_unlock_irqrestore(&mapping->tree_lock, flags);
> > > > > > > >         } else {
> > > > > > > >                 ret = TestClearPageWriteback(page);
> > > > > > > >         }
> > > > > > > >         if (ret)
> > > > > > > >                 dec_zone_page_state(page, NR_WRITEBACK);
> > > > > > > >         return ret;
> > > > > > > > }
> > > > > > > 
> > > > > > > We can move this up under tree_lock. Considering memcg, all of
> > > > > > > our targets have a ->mapping.
> > > > > > > 
> > > > > > > If we newly account bounce buffers (for NILFS, FUSE, etc.),
> > > > > > > which have no ->mapping, we'll need a much more complex new
> > > > > > > charge/uncharge scheme.
> > > > > > > 
> > > > > > > But yes, adding a new locking scheme seems complicated. (Sorry, Andrea.)
> > > > > > > My concern is performance. We may need some new re-implementation
> > > > > > > of the lock/migrate/charge/uncharge paths.
> > > > > > > 
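
To make that suggestion concrete, a minimal sketch (not a posted patch) of
moving the NR_WRITEBACK accounting under tree_lock could look like the
following; the !mapping path keeps the old behaviour, and the __ variant of
the zone stat update is enough because IRQs are already disabled there.

/* Sketch only: NR_WRITEBACK accounting moved under mapping->tree_lock,
 * so a memcg hook could later be added inside the same IRQ-disabled
 * section. */
int test_clear_page_writeback(struct page *page)
{
        struct address_space *mapping = page_mapping(page);
        int ret;

        if (mapping) {
                struct backing_dev_info *bdi = mapping->backing_dev_info;
                unsigned long flags;

                spin_lock_irqsave(&mapping->tree_lock, flags);
                ret = TestClearPageWriteback(page);
                if (ret) {
                        radix_tree_tag_clear(&mapping->page_tree,
                                                page_index(page),
                                                PAGECACHE_TAG_WRITEBACK);
                        if (bdi_cap_account_writeback(bdi)) {
                                __dec_bdi_stat(bdi, BDI_WRITEBACK);
                                __bdi_writeout_inc(bdi);
                        }
                        /* moved: IRQs are off, so the __ variant suffices */
                        __dec_zone_page_state(page, NR_WRITEBACK);
                }
                spin_unlock_irqrestore(&mapping->tree_lock, flags);
        } else {
                ret = TestClearPageWriteback(page);
                if (ret)
                        dec_zone_page_state(page, NR_WRITEBACK);
        }
        return ret;
}
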
> > > > > > I agree. Performance is my concern too.
> > > > > > 
> > > > > > I made a patch below and measured the time (average of 10 runs)
> > > > > > of a kernel build on tmpfs (make -j8 on an 8-CPU machine with the
> > > > > > 2.6.33 defconfig).
> > > > > > 
> > > > > > <before>
> > > > > > - root cgroup: 190.47 sec
> > > > > > - child cgroup: 192.81 sec
> > > > > > 
> > > > > > <after>
> > > > > > - root cgroup: 191.06 sec
> > > > > > - child cgroup: 193.06 sec
> > > > > > 
> > > > > > Hmm... about 0.3% slower for root, 0.1% slower for child.
> > > > > > 
> > > > > 
> > > > > Hmm... acceptable? (It sounds like it's within the error range.)
> > > > > 
> > > > > BTW, why local_irq_disable()?
> > > > > Wouldn't local_irq_save()/restore() be better?
> > > > 
> > > > Probably because there's no overhead of saving the flags?
> > > maybe.
> > > 
> > > > Anyway, it would make the code much more readable...
> > > > 
> > > ok.
> > > 
> > > Please go ahead in this direction. Nishimura-san, would you post an
> > > independent patch? If not, Andrea-san, please.
> > > 
> > This is the updated version.
> > 
> > Andrea-san, can you merge this into your patch set?
> > 
> 
> Please please measure the performance overhead of this change.
> 
> -- 
>       Three Cheers,
>       Balbir