On March 15, 2017 5:00 PM Aaron Lu wrote: 
>  void tlb_finish_mmu(struct mmu_gather *tlb, unsigned long start, unsigned 
> long end)
>  {
> +     struct batch_free_struct *batch_free, *n;
> +
s/*n/*next/

>       tlb_flush_mmu(tlb);
> 
>       /* keep the page table cache within bounds */
>       check_pgt_cache();
> 
> +     list_for_each_entry_safe(batch_free, n, &tlb->worker_list, list) {
> +             flush_work(&batch_free->work);

Not sure: should these entries be list_del()'ed from worker_list before being freed?

> +             kfree(batch_free);
> +     }
> +
>       tlb_flush_mmu_free_batches(tlb->local.next, true);
>       tlb->local.next = NULL;
>  }
