On 06/16/2016 08:14 AM, Lukasz Anaczkowski wrote:
> For reclaim this brings the performance back to before Mel's
> flushing changes, but for unmap it disables batching.
This turns out to be pretty catastrophic for unmap. In a workload that
uses, say, 200 hardware threads and allocates/frees a few MB of memory in
each of them, the unmap path ends up doing a remote TLB shootdown for every
single PTE it clears instead of one batched flush for the whole range.
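To make the cost concrete, here is a rough sketch of the two behaviours
(hypothetical helper names, not the actual zap_pte_range()/mmu_gather code):
with the workaround hooked into ptep_get_and_clear(), every cleared PTE can
trigger its own remote shootdown, while the batched path clears the whole
range and flushes once.

/*
 * Sketch only: contrast per-PTE shootdowns (what the workaround causes
 * on affected CPUs) with one batched flush covering the whole range.
 */
#include <linux/mm.h>
#include <asm/tlbflush.h>

/* Workaround path: each clear may IPI every other CPU in mm_cpumask(). */
static void unmap_range_per_pte_flush(struct mm_struct *mm, pte_t *ptep,
				      unsigned long addr, unsigned long end)
{
	for (; addr < end; addr += PAGE_SIZE, ptep++)
		ptep_get_and_clear(mm, addr, ptep);	/* fix_pte_leak() fires here */
}

/* Batched path: clear the whole range first, then do a single flush. */
static void unmap_range_batched_flush(struct mm_struct *mm, pte_t *ptep,
				      unsigned long start, unsigned long end)
{
	unsigned long addr;

	for (addr = start; addr < end; addr += PAGE_SIZE, ptep++)
		ptep_get_and_clear(mm, addr, ptep);
	flush_tlb_mm_range(mm, start, end, 0UL);	/* one shootdown for N pages */
}

With ~200 hardware threads in mm_cpumask(), the first variant pays the IPI
latency once per page rather than once per munmap()/exit.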
Lukasz Anaczkowski wrote:
> From: Andi Kleen
>
> +void fix_pte_leak(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
> +{
> +	if (cpumask_any_but(mm_cpumask(mm), smp_processor_id()) < nr_cpu_ids) {
> +		trace_tlb_flush(TLB_LOCAL_SHOOTDOWN, TLB_FLUSH_ALL);
This tracing seems out of place here.
From: Andi Kleen
Knights Landing has an issue that a thread setting A or D bits
may not do so atomically against checking the present bit.
A thread which is going to page fault may still set those
bits, even though the present bit was already atomically cleared.
This implies that when the kernel clears a PTE it cannot assume the entry
stays cleared: a late A or D bit write can still land in it, so the TLB on
all other CPUs that may have the mapping cached has to be flushed (and the
clear redone) before the kernel trusts or reuses the entry.
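For illustration, a minimal sketch of where such a workaround has to hook in.
The fix_pte_leak() name is taken from the patch quoted above; the
X86_BUG_PTE_LEAK bug flag is assumed here for the sketch and is not something
mainline defines.

/*
 * Sketch only, not the patch as posted: run the extra flush only on
 * parts that have the erratum. X86_BUG_PTE_LEAK is an assumed bug flag.
 */
static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
				       pte_t *ptep)
{
	pte_t pte = native_ptep_get_and_clear(ptep);

	/*
	 * On affected CPUs another thread may still write A/D bits into
	 * the entry we just cleared; flush the other CPUs before the
	 * caller relies on the cleared value.
	 */
	if (static_cpu_has_bug(X86_BUG_PTE_LEAK))
		fix_pte_leak(mm, addr, ptep);
	return pte;
}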