On Thu, Nov 16, 2017 at 10:20:42AM +0100, Michal Hocko wrote:
> On Wed 15-11-17 17:33:32, Will Deacon wrote:
> > Hi Michal,
> >
> > On Fri, Nov 10, 2017 at 01:26:35PM +0100, Michal Hocko wrote:
> > > From 7f0fcd2cab379ddac5611b2a520cdca8a77a235b Mon Sep 17 00:00:00 2001
> > > From: Michal Hocko
>
On Wed 15-11-17 17:33:32, Will Deacon wrote:
> Hi Michal,
>
> On Fri, Nov 10, 2017 at 01:26:35PM +0100, Michal Hocko wrote:
> > From 7f0fcd2cab379ddac5611b2a520cdca8a77a235b Mon Sep 17 00:00:00 2001
> > From: Michal Hocko
> > Date: Fri, 10 Nov 2017 11:27:17 +0100
> > Subject: [PATCH] arch, mm: introduce arch_tlb_gather_mmu_lazy
On Thu 16-11-17 09:44:57, Minchan Kim wrote:
> On Wed, Nov 15, 2017 at 09:14:52AM +0100, Michal Hocko wrote:
> > On Mon 13-11-17 09:28:33, Minchan Kim wrote:
> > [...]
> > > void arch_tlb_gather_mmu(...)
> > >
> > > tlb->fullmm = !(start | (end + 1)) && atomic_read(&mm->mm_users) == 0;
On Wed, Nov 15, 2017 at 09:14:52AM +0100, Michal Hocko wrote:
> On Mon 13-11-17 09:28:33, Minchan Kim wrote:
> [...]
> > void arch_tlb_gather_mmu(...)
> >
> > tlb->fullmm = !(start | (end + 1)) && atomic_read(&mm->mm_users) == 0;
>
> Sorry, I should have realized sooner but this will not work for the oom
> reaper.
Hi Michal,
On Fri, Nov 10, 2017 at 01:26:35PM +0100, Michal Hocko wrote:
> From 7f0fcd2cab379ddac5611b2a520cdca8a77a235b Mon Sep 17 00:00:00 2001
> From: Michal Hocko
> Date: Fri, 10 Nov 2017 11:27:17 +0100
> Subject: [PATCH] arch, mm: introduce arch_tlb_gather_mmu_lazy
>
> 5a7862e83000 ("arm64: tlbflush: avoid flushing when fullmm == 1")
On Mon 13-11-17 09:28:33, Minchan Kim wrote:
[...]
> void arch_tlb_gather_mmu(...)
>
> tlb->fullmm = !(start | (end + 1)) && atomic_read(&mm->mm_users) == 0;
Sorry, I should have realized sooner but this will not work for the oom
reaper. It _can_ race with the final exit_mmap and run with
On Tue, Nov 14, 2017 at 08:21:00AM +0100, Michal Hocko wrote:
> On Tue 14-11-17 10:45:49, Minchan Kim wrote:
> [...]
> > Anyway, I think Wang Nan's patch is already broken.
> > http://lkml.kernel.org/r/%3c20171107095453.179940-1-wangn...@huawei.com%3E
> >
> > Because unmap_page_range (i.e., zap_pte_range) can flush TLB forcefully
> > and free pages.
On Tue 14-11-17 10:45:49, Minchan Kim wrote:
[...]
> Anyway, I think Wang Nan's patch is already broken.
> http://lkml.kernel.org/r/%3c20171107095453.179940-1-wangn...@huawei.com%3E
>
> Because unmap_page_range (i.e., zap_pte_range) can flush TLB forcefully
> and free pages. However, the architecture
On Mon, Nov 13, 2017 at 10:51:07AM +0100, Michal Hocko wrote:
> On Mon 13-11-17 09:28:33, Minchan Kim wrote:
> [...]
> > Thanks for the patch, Michal.
> > However, it would be nice to do it transparently without asking
> > users for new flags.
> >
> > When I read tlb_gather_mmu's description, fullmm is supposed to
> > be used only if there are no users and the full address space
On Mon 13-11-17 09:28:33, Minchan Kim wrote:
[...]
> Thanks for the patch, Michal.
> However, it would be nice to do it transparently without asking
> users for new flags.
>
> When I read tlb_gather_mmu's description, fullmm is supposed to
> be used only if there are no users and the full address space
On Fri, Nov 10, 2017 at 01:26:35PM +0100, Michal Hocko wrote:
> On Fri 10-11-17 11:15:29, Michal Hocko wrote:
> > On Fri 10-11-17 09:19:33, Minchan Kim wrote:
> > > On Tue, Nov 07, 2017 at 09:54:53AM +, Wang Nan wrote:
> > > > tlb_gather_mmu(&tlb, mm, 0, -1) means gathering the whole virtual memory
On Fri 10-11-17 11:15:29, Michal Hocko wrote:
> On Fri 10-11-17 09:19:33, Minchan Kim wrote:
> > On Tue, Nov 07, 2017 at 09:54:53AM +, Wang Nan wrote:
> > > tlb_gather_mmu(&tlb, mm, 0, -1) means gathering the whole virtual memory
> > > space. In this case, tlb->fullmm is true. Some archs like arm64 don't
> > > flush TLB when tlb->fullmm is true:
On Fri 10-11-17 09:19:33, Minchan Kim wrote:
> On Tue, Nov 07, 2017 at 09:54:53AM +, Wang Nan wrote:
> > tlb_gather_mmu(&tlb, mm, 0, -1) means gathering the whole virtual memory
> > space. In this case, tlb->fullmm is true. Some archs like arm64 don't
> > flush TLB when tlb->fullmm is true:
>
On Tue, Nov 07, 2017 at 09:54:53AM +, Wang Nan wrote:
> tlb_gather_mmu(&tlb, mm, 0, -1) means gathering the whole virtual memory
> space. In this case, tlb->fullmm is true. Some archs like arm64 don't
> flush TLB when tlb->fullmm is true:
>
> commit 5a7862e83000 ("arm64: tlbflush: avoid flushing when fullmm == 1")
On Tue 07-11-17 09:54:53, Wang Nan wrote:
> tlb_gather_mmu(&tlb, mm, 0, -1) means gathering the whole virtual memory
> space. In this case, tlb->fullmm is true. Some archs like arm64 don't
> flush TLB when tlb->fullmm is true:
>
> commit 5a7862e83000 ("arm64: tlbflush: avoid flushing when fullmm == 1")
tlb_gather_mmu(&tlb, mm, 0, -1) means gathering the whole virtual memory
space. In this case, tlb->fullmm is true. Some archs like arm64 don't
flush TLB when tlb->fullmm is true:
commit 5a7862e83000 ("arm64: tlbflush: avoid flushing when fullmm == 1").
This leaks stale TLB entries.