On Tue, 12 May 2020 17:02:50 +0200
Joerg Roedel wrote:
On Mon, May 11, 2020 at 12:36:19PM -0700, Andy Lutomirski wrote:
> I’m guessing the right solution is either your series or your series
> plus preallocation on 64-bit. I’m just grumpy about it...
Okay, so we can do the pre-allocation when it turns out the pgd_list
lock-times become a problem on
On Mon, 11 May 2020 21:14:14 +0200
Joerg Roedel wrote:
> > Or maybe we don't want to defeature this much, or maybe the memory hit
> > from this preallocation will hurt little 2-level 32-bit systems too
> > much.
>
> It will certainly make Linux less likely to boot on low-memory x86-32
>
> On May 11, 2020, at 12:14 PM, Joerg Roedel wrote:
>
> On Mon, May 11, 2020 at 08:36:31AM -0700, Andy Lutomirski wrote:
>> What if we make 32-bit PTI depend on PAE?
>
> It already does; PTI support for legacy paging had to be removed because
> there were memory corruption problems with THP.
On Mon, May 11, 2020 at 08:36:31AM -0700, Andy Lutomirski wrote:
> What if we make 32-bit PTI depend on PAE?
It already does; PTI support for legacy paging had to be removed because
there were memory corruption problems with THP. The reason was that huge
PTEs in the user-space area were mapped in
On Mon, May 11, 2020 at 08:36:31AM -0700, Andy Lutomirski wrote:
> On Mon, May 11, 2020 at 12:42 AM Peter Zijlstra wrote:
> >
> > On Sat, May 09, 2020 at 12:05:29PM -0700, Andy Lutomirski wrote:
> >
> > > On x86_64, the only real advantage is that the handful of corner cases
> > > that make
On Mon, May 11, 2020 at 08:52:04AM -0700, Matthew Wilcox wrote:
> On Mon, May 11, 2020 at 09:31:34AM +0200, Peter Zijlstra wrote:
> > On Sat, May 09, 2020 at 06:11:57PM -0700, Matthew Wilcox wrote:
> > > Iterating an XArray (whether the entire thing
> > > or with marks) is RCU-safe and faster than
On Mon, May 11, 2020 at 09:31:34AM +0200, Peter Zijlstra wrote:
> On Sat, May 09, 2020 at 06:11:57PM -0700, Matthew Wilcox wrote:
> > Iterating an XArray (whether the entire thing
> > or with marks) is RCU-safe and faster than iterating a linked list,
> > so this should solve the problem?
>
> It
On Mon, May 11, 2020 at 12:42 AM Peter Zijlstra wrote:
>
> On Sat, May 09, 2020 at 12:05:29PM -0700, Andy Lutomirski wrote:
>
> > On x86_64, the only real advantage is that the handful of corner cases
> > that make vmalloc faults unpleasant (mostly relating to vmap stacks)
> > go away. On
On Sat, May 09, 2020 at 12:05:29PM -0700, Andy Lutomirski wrote:
> On x86_64, the only real advantage is that the handful of corner cases
> that make vmalloc faults unpleasant (mostly relating to vmap stacks)
> go away. On x86_32, a bunch of mind-bending stuff (everything your
> series deletes
On Sat, May 09, 2020 at 06:11:57PM -0700, Matthew Wilcox wrote:
> Iterating an XArray (whether the entire thing
> or with marks) is RCU-safe and faster than iterating a linked list,
> so this should solve the problem?
It can hardly be faster if you want all elements -- which is I think the
case
On Sat, May 09, 2020 at 10:05:43PM -0700, Andy Lutomirski wrote:
> On Sat, May 9, 2020 at 2:57 PM Joerg Roedel wrote:
> I spent some time looking at the code, and I'm guessing you're talking
> about the 3-level !SHARED_KERNEL_PMD case. I can't quite figure out
> what's going on.
>
> Can you
On Sat, May 9, 2020 at 2:57 PM Joerg Roedel wrote:
>
> Hi Andy,
>
> On Sat, May 09, 2020 at 12:05:29PM -0700, Andy Lutomirski wrote:
> > So, unless I'm missing something here, there is an absolute maximum of
> > 512 top-level entries that ever need to be synchronized.
>
> And here is where your
On Sat, May 09, 2020 at 11:25:16AM +0200, Peter Zijlstra wrote:
> On Fri, May 08, 2020 at 11:34:07PM +0200, Joerg Roedel wrote:
> > On Fri, May 08, 2020 at 09:20:00PM +0200, Peter Zijlstra wrote:
> > > The only concern I have is the pgd_lock lock hold times.
> > >
> > > By not doing on-demand
Hi Andy,
On Sat, May 09, 2020 at 12:05:29PM -0700, Andy Lutomirski wrote:
> 1. Non-PAE. There is a single 4k top-level page table per mm, and
> this table contains either 512 or 1024 entries total. Of those
> entries, some fraction (half or less) control the kernel address
> space, and some
On Sat, May 9, 2020 at 10:52 AM Joerg Roedel wrote:
>
> On Fri, May 08, 2020 at 04:49:17PM -0700, Andy Lutomirski wrote:
> > On Fri, May 8, 2020 at 2:36 PM Joerg Roedel wrote:
> > >
> > > On Fri, May 08, 2020 at 02:33:19PM -0700, Andy Lutomirski wrote:
> > > > On Fri, May 8, 2020 at 7:40 AM
On Sat, May 09, 2020 at 11:25:16AM +0200, Peter Zijlstra wrote:
> Right; it's just that the moment you do trigger it, it'll iterate that
> pgd_list and that is potentially huge. Then again, that's not a new
> problem.
>
> I suppose we can deal with it if/when it becomes an actual problem.
>
> I
On Fri, May 08, 2020 at 04:49:17PM -0700, Andy Lutomirski wrote:
> On Fri, May 8, 2020 at 2:36 PM Joerg Roedel wrote:
> >
> > On Fri, May 08, 2020 at 02:33:19PM -0700, Andy Lutomirski wrote:
> > > On Fri, May 8, 2020 at 7:40 AM Joerg Roedel wrote:
> >
> > > What's the maximum on other system
On Fri, May 08, 2020 at 11:34:07PM +0200, Joerg Roedel wrote:
> Hi Peter,
>
> thanks for reviewing this!
>
> On Fri, May 08, 2020 at 09:20:00PM +0200, Peter Zijlstra wrote:
> > The only concern I have is the pgd_lock lock hold times.
> >
> > By not doing on-demand faults anymore, and
On Fri, May 8, 2020 at 2:36 PM Joerg Roedel wrote:
>
> On Fri, May 08, 2020 at 02:33:19PM -0700, Andy Lutomirski wrote:
> > On Fri, May 8, 2020 at 7:40 AM Joerg Roedel wrote:
>
> > What's the maximum on other system types? It might make more sense to
> > take the memory hit and pre-populate all
On Fri, May 08, 2020 at 02:33:19PM -0700, Andy Lutomirski wrote:
> On Fri, May 8, 2020 at 7:40 AM Joerg Roedel wrote:
> What's the maximum on other system types? It might make more sense to
> take the memory hit and pre-populate all the tables at boot so we
> never have to sync them.
Need to
Hi Peter,
thanks for reviewing this!
On Fri, May 08, 2020 at 09:20:00PM +0200, Peter Zijlstra wrote:
> The only concern I have is the pgd_lock lock hold times.
>
> By not doing on-demand faults anymore, and consistently calling
> sync_global_*(), we iterate that pgd_list thing much more often
On Fri, May 8, 2020 at 7:40 AM Joerg Roedel wrote:
>
> Hi,
>
> after the recent issue with vmalloc and tracing code[1] on x86 and a
> long history of previous issues related to the vmalloc_sync_mappings()
> interface, I thought the time had come to remove it. Please
> see [2], [3], and [4] for
On Fri, May 08, 2020 at 04:40:36PM +0200, Joerg Roedel wrote:
> Joerg Roedel (7):
> mm: Add functions to track page directory modifications
> mm/vmalloc: Track which page-table levels were modified
> mm/ioremap: Track which page-table levels were modified
> x86/mm/64: Implement
Hi,
after the recent issue with vmalloc and tracing code[1] on x86 and a
long history of previous issues related to the vmalloc_sync_mappings()
interface, I thought the time had come to remove it. Please
see [2], [3], and [4] for some other issues in the past.
The patches are based on v5.7-rc4