On Wed, 2016-06-15 at 22:33 -0700, Andy Lutomirski wrote:
>
> > > +++ b/arch/x86/mm/tlb.c
> > > @@ -77,10 +77,25 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
> > > unsigned cpu = smp_processor_id();
> > >
> > > if (likely(prev != next)) {
> > > +
On Jun 15, 2016 9:32 PM, "Mika Penttilä" wrote:
>
> Hi,
>
> > diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
> > index 5643fd0b1a7d..fbf036ae72ac 100644
> > --- a/arch/x86/mm/tlb.c
> > +++ b/arch/x86/mm/tlb.c
> > @@ -77,10 +77,25 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
Hi,
On 06/16/2016 03:28 AM, Andy Lutomirski wrote:
> This allows x86_64 kernels to enable vmapped stacks. There are a
> couple of interesting bits.
>
> First, x86 lazily faults in top-level paging entries for the vmalloc
> area. This won't work if we get a page fault while trying to access
> the stack: the CPU will promote it to a double-fault and we'll die