Re: [PATCH] guard mm->rss with page_table_lock (241p11)
Hi,

On Mon, Feb 12, 2001 at 07:15:57PM -0800, george anzinger wrote:

> Excuse me if I am off base here, but wouldn't an atomic operation be
> better here. There are atomic inc/dec and add/sub macros for this. It
> just seems that that is all that is needed here (from inspection of
> the patch).

The counter-argument is that we already hold the page table lock in the
vast majority of places where the rss is modified, so overall it's
cheaper to avoid the extra atomic update.

Cheers,
 Stephen

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Re: [PATCH] guard mm->rss with page_table_lock (241p11)
On Mon, 12 Feb 2001, george anzinger wrote:

> Excuse me if I am off base here, but wouldn't an atomic operation be
> better here. There are atomic inc/dec and add/sub macros for this. It
> just seems that that is all that is needed here (from inspection of
> the patch).

Most functions that touch mm->rss already hold mm->page_table_lock
(these functions are also called more often, and they use more CPU).
Making those functions use an atomic instruction just to optimize the
functions which do not take mm->page_table_lock is not a good tradeoff.
Re: [PATCH] guard mm->rss with page_table_lock (241p11)
Excuse me if I am off base here, but wouldn't an atomic operation be
better here. There are atomic inc/dec and add/sub macros for this. It
just seems that that is all that is needed here (from inspection of
the patch).

George

Rasmus Andersen wrote:
>
> On Mon, Jan 29, 2001 at 07:30:01PM -0200, Rik van Riel wrote:
> > On Mon, 29 Jan 2001, Rasmus Andersen wrote:
> >
> > > Please comment. Or else I will continue to submit it :)
> >
> > The following will hang the kernel on SMP, since you're
> > already holding the spinlock here. Try compiling with
> > CONFIG_SMP and see what happens...
>
> You are right. Sloppy research by me :(
>
> New patch below with the vmscan part removed.
>
> diff -aur linux-2.4.1-pre11-clean/mm/memory.c linux/mm/memory.c
> --- linux-2.4.1-pre11-clean/mm/memory.c Sun Jan 28 20:53:13 2001
> +++ linux/mm/memory.c Sun Jan 28 22:43:04 2001
> @@ -377,7 +377,6 @@
>                 address = (address + PGDIR_SIZE) & PGDIR_MASK;
>                 dir++;
>         } while (address && (address < end));
> -       spin_unlock(&mm->page_table_lock);
>         /*
>          * Update rss for the mm_struct (not necessarily current->mm)
>          * Notice that rss is an unsigned long.
> @@ -386,6 +385,7 @@
>                 mm->rss -= freed;
>         else
>                 mm->rss = 0;
> +       spin_unlock(&mm->page_table_lock);
>  }
>
> @@ -1038,7 +1038,9 @@
>                 flush_icache_page(vma, page);
>         }
>
> +       spin_lock(&mm->page_table_lock);
>         mm->rss++;
> +       spin_unlock(&mm->page_table_lock);
>
>         pte = mk_pte(page, vma->vm_page_prot);
>
> @@ -1072,7 +1074,9 @@
>                 return -1;
>         clear_user_highpage(page, addr);
>         entry = pte_mkwrite(pte_mkdirty(mk_pte(page, vma->vm_page_prot)));
> +       spin_lock(&mm->page_table_lock);
>         mm->rss++;
> +       spin_unlock(&mm->page_table_lock);
>         flush_page_to_ram(page);
>         }
>         set_pte(page_table, entry);
> @@ -1111,7 +1115,9 @@
>                 return 0;
>         if (new_page == NOPAGE_OOM)
>                 return -1;
> +       spin_lock(&mm->page_table_lock);
>         ++mm->rss;
> +       spin_unlock(&mm->page_table_lock);
>         /*
>          * This silly early PAGE_DIRTY setting removes a race
>          * due to the bad i386 page protection. But it's valid
> diff -aur linux-2.4.1-pre11-clean/mm/mmap.c linux/mm/mmap.c
> --- linux-2.4.1-pre11-clean/mm/mmap.c Sat Dec 30 18:35:19 2000
> +++ linux/mm/mmap.c Sun Jan 28 22:43:04 2001
> @@ -879,8 +879,8 @@
>         spin_lock(&mm->page_table_lock);
>         mpnt = mm->mmap;
>         mm->mmap = mm->mmap_avl = mm->mmap_cache = NULL;
> -       spin_unlock(&mm->page_table_lock);
>         mm->rss = 0;
> +       spin_unlock(&mm->page_table_lock);
>         mm->total_vm = 0;
>         mm->locked_vm = 0;
>         while (mpnt) {
> diff -aur linux-2.4.1-pre11-clean/mm/swapfile.c linux/mm/swapfile.c
> --- linux-2.4.1-pre11-clean/mm/swapfile.c Fri Dec 29 23:07:24 2000
> +++ linux/mm/swapfile.c Sun Jan 28 22:43:04 2001
> @@ -231,7 +231,9 @@
>         set_pte(dir, pte_mkdirty(mk_pte(page, vma->vm_page_prot)));
>         swap_free(entry);
>         get_page(page);
> +       spin_lock(&vma->vm_mm->page_table_lock);
>         ++vma->vm_mm->rss;
> +       spin_unlock(&vma->vm_mm->page_table_lock);
>  }
>
> static inline void unuse_pmd(struct vm_area_struct * vma, pmd_t *dir,
>
> --
> Rasmus([EMAIL PROTECTED])
Re: [PATCH] guard mm->rss with page_table_lock (241p11)
David Howells writes:

> Would it not be better to use some sort of atomic add/subtract/clear
> operation rather than a spinlock? (Which would also give you fewer
> atomic memory access cycles.)

Please see the older threads about this; it has been discussed to
death already (hint: sizeof(atomic_t) vs. sizeof(unsigned long)).

Later,
David S. Miller
[EMAIL PROTECTED]
Re: [PATCH] guard mm->rss with page_table_lock (241p11)
On Tue, Jan 30, 2001 at 08:18:56AM +0000, David Howells wrote:
> > ...
> > +       spin_lock(&mm->page_table_lock);
> >         mm->rss++;
> > +       spin_unlock(&mm->page_table_lock);
> > ...
>
> Would it not be better to use some sort of atomic add/subtract/clear
> operation rather than a spinlock? (Which would also give you fewer
> atomic memory access cycles.)

This will unfortunately not do for all platforms. Please read
http://marc.theaimsgroup.com/?t=9763076813&w=2&r=1
for the last discussion of this.

Regards,
Rasmus
Re: [PATCH] guard mm->rss with page_table_lock (241p11)
> ...
> +       spin_lock(&mm->page_table_lock);
>         mm->rss++;
> +       spin_unlock(&mm->page_table_lock);
> ...

Would it not be better to use some sort of atomic add/subtract/clear
operation rather than a spinlock? (Which would also give you fewer
atomic memory access cycles.)

David
Re: [PATCH] guard mm->rss with page_table_lock (241p11)
On Mon, 29 Jan 2001, Rasmus Andersen wrote:

> On Mon, Jan 29, 2001 at 07:30:01PM -0200, Rik van Riel wrote:
> > On Mon, 29 Jan 2001, Rasmus Andersen wrote:
> >
> > > Please comment. Or else I will continue to submit it :)
> >
> > The following will hang the kernel on SMP, since you're
> > already holding the spinlock here. Try compiling with
> > CONFIG_SMP and see what happens...
>
> You are right. Sloppy research by me :(
>
> New patch below with the vmscan part removed.

Have you also checked the rest of the patch? Don't be surprised if
there are more places like this. Please compile your kernel with
CONFIG_SMP and test that your changes work before submitting them for
the stable kernel.

regards,

Rik
--
Virtual memory is like a game you can't win;
However, without VM there's truly nothing to lose...

http://www.surriel.com/
http://www.conectiva.com/       http://distro.conectiva.com.br/
Re: [PATCH] guard mm->rss with page_table_lock (241p11)
On Mon, Jan 29, 2001 at 07:30:01PM -0200, Rik van Riel wrote:
> On Mon, 29 Jan 2001, Rasmus Andersen wrote:
>
> > Please comment. Or else I will continue to submit it :)
>
> The following will hang the kernel on SMP, since you're
> already holding the spinlock here. Try compiling with
> CONFIG_SMP and see what happens...

You are right. Sloppy research by me :(

New patch below with the vmscan part removed.

diff -aur linux-2.4.1-pre11-clean/mm/memory.c linux/mm/memory.c
--- linux-2.4.1-pre11-clean/mm/memory.c Sun Jan 28 20:53:13 2001
+++ linux/mm/memory.c Sun Jan 28 22:43:04 2001
@@ -377,7 +377,6 @@
                address = (address + PGDIR_SIZE) & PGDIR_MASK;
                dir++;
        } while (address && (address < end));
-       spin_unlock(&mm->page_table_lock);
        /*
         * Update rss for the mm_struct (not necessarily current->mm)
         * Notice that rss is an unsigned long.
@@ -386,6 +385,7 @@
                mm->rss -= freed;
        else
                mm->rss = 0;
+       spin_unlock(&mm->page_table_lock);
 }

@@ -1038,7 +1038,9 @@
                flush_icache_page(vma, page);
        }

+       spin_lock(&mm->page_table_lock);
        mm->rss++;
+       spin_unlock(&mm->page_table_lock);

        pte = mk_pte(page, vma->vm_page_prot);

@@ -1072,7 +1074,9 @@
                return -1;
        clear_user_highpage(page, addr);
        entry = pte_mkwrite(pte_mkdirty(mk_pte(page, vma->vm_page_prot)));
+       spin_lock(&mm->page_table_lock);
        mm->rss++;
+       spin_unlock(&mm->page_table_lock);
        flush_page_to_ram(page);
        }
        set_pte(page_table, entry);
@@ -1111,7 +1115,9 @@
                return 0;
        if (new_page == NOPAGE_OOM)
                return -1;
+       spin_lock(&mm->page_table_lock);
        ++mm->rss;
+       spin_unlock(&mm->page_table_lock);
        /*
         * This silly early PAGE_DIRTY setting removes a race
         * due to the bad i386 page protection. But it's valid
diff -aur linux-2.4.1-pre11-clean/mm/mmap.c linux/mm/mmap.c
--- linux-2.4.1-pre11-clean/mm/mmap.c Sat Dec 30 18:35:19 2000
+++ linux/mm/mmap.c Sun Jan 28 22:43:04 2001
@@ -879,8 +879,8 @@
        spin_lock(&mm->page_table_lock);
        mpnt = mm->mmap;
        mm->mmap = mm->mmap_avl = mm->mmap_cache = NULL;
-       spin_unlock(&mm->page_table_lock);
        mm->rss = 0;
+       spin_unlock(&mm->page_table_lock);
        mm->total_vm = 0;
        mm->locked_vm = 0;
        while (mpnt) {
diff -aur linux-2.4.1-pre11-clean/mm/swapfile.c linux/mm/swapfile.c
--- linux-2.4.1-pre11-clean/mm/swapfile.c Fri Dec 29 23:07:24 2000
+++ linux/mm/swapfile.c Sun Jan 28 22:43:04 2001
@@ -231,7 +231,9 @@
        set_pte(dir, pte_mkdirty(mk_pte(page, vma->vm_page_prot)));
        swap_free(entry);
        get_page(page);
+       spin_lock(&vma->vm_mm->page_table_lock);
        ++vma->vm_mm->rss;
+       spin_unlock(&vma->vm_mm->page_table_lock);
 }

static inline void unuse_pmd(struct vm_area_struct * vma, pmd_t *dir,

--
Rasmus([EMAIL PROTECTED])
Re: [PATCH] guard mm->rss with page_table_lock (241p11)
On Mon, 29 Jan 2001, Rasmus Andersen wrote:

> Please comment. Or else I will continue to submit it :)

The following will hang the kernel on SMP, since you're
already holding the spinlock here. Try compiling with
CONFIG_SMP and see what happens...

> diff -aur linux-2.4.1-pre11-clean/mm/vmscan.c linux/mm/vmscan.c
> --- linux-2.4.1-pre11-clean/mm/vmscan.c Sun Jan 28 20:53:13 2001
> +++ linux/mm/vmscan.c Mon Jan 29 22:09:18 2001
> @@ -72,7 +72,9 @@
>                 swap_duplicate(entry);
>                 set_pte(page_table, swp_entry_to_pte(entry));
> drop_pte:
> +               spin_lock(&mm->page_table_lock);
>                 mm->rss--;
> +               spin_unlock(&mm->page_table_lock);
>                 if (!page->age)
>                         deactivate_page(page);
>                 UnlockPage(page);

regards,

Rik
--
Virtual memory is like a game you can't win;
However, without VM there's truly nothing to lose...

http://www.surriel.com/
http://www.conectiva.com/       http://distro.conectiva.com.br/