On Thu, 15 Jan 2015 16:40:32 +0900
Joonsoo Kim <iamjoonsoo....@lge.com> wrote:

[...]
> 
> I saw roughly 5% win in a fast-path loop over kmem_cache_alloc/free
> in CONFIG_PREEMPT. (14.821 ns -> 14.049 ns)
> 
> Below is the result of Christoph's slab_test reported by
> Jesper Dangaard Brouer.
>
[...]

Acked-by: Jesper Dangaard Brouer <bro...@redhat.com>

> Acked-by: Christoph Lameter <c...@linux.com>
> Tested-by: Jesper Dangaard Brouer <bro...@redhat.com>
> Signed-off-by: Joonsoo Kim <iamjoonsoo....@lge.com>
> ---
>  mm/slub.c |   35 +++++++++++++++++++++++------------
>  1 file changed, 23 insertions(+), 12 deletions(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index fe376fe..ceee1d7 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2398,13 +2398,24 @@ redo:
[...]
>        */
> -     preempt_disable();
> -     c = this_cpu_ptr(s->cpu_slab);
> +     do {
> +             tid = this_cpu_read(s->cpu_slab->tid);
> +             c = this_cpu_ptr(s->cpu_slab);
> +     } while (IS_ENABLED(CONFIG_PREEMPT) && unlikely(tid != c->tid));
> +
> +     /*
> +      * Irqless object alloc/free alogorithm used here depends on sequence

The spelling of "algorithm" has a typo ^^

> +      * of fetching cpu_slab's data. tid should be fetched before anything
> +      * on c to guarantee that object and page associated with previous tid
> +      * won't be used with current tid. If we fetch tid first, object and
> +      * page could be one associated with next tid and our alloc/free
> +      * request will be failed. In this case, we will retry. So, no problem.
> +      */
> +     barrier();
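
Just to spell out the ordering argument for anyone skimming the thread,
here is a minimal userspace sketch of the snapshot-and-retry pattern. It
is not kernel code: fake_this_cpu_read_tid()/fake_this_cpu_ptr() and
cpu_slab_instance are made-up stand-ins for the real per-cpu accessors,
and since it is single-threaded the loop here never actually retries.

  #include <stdio.h>

  struct kmem_cache_cpu {
          unsigned long tid;      /* bumped on every fast-path transaction */
          void *freelist;
  };

  static struct kmem_cache_cpu cpu_slab_instance;

  /* Stand-ins for this_cpu_read()/this_cpu_ptr(); in the kernel the two
   * reads can land on different CPUs if the task migrates in between. */
  #define fake_this_cpu_read_tid()  (cpu_slab_instance.tid)
  #define fake_this_cpu_ptr()       (&cpu_slab_instance)

  static void *fastpath_snapshot(void)
  {
          struct kmem_cache_cpu *c;
          unsigned long tid;

          do {
                  /* tid is read first; if we migrated before reading c,
                   * c->tid belongs to the new CPU, the two values disagree
                   * and we simply retry. */
                  tid = fake_this_cpu_read_tid();
                  c = fake_this_cpu_ptr();
          } while (tid != c->tid);  /* kernel: only checked under CONFIG_PREEMPT */

          /* stands in for barrier(): keep the compiler from hoisting the
           * loads of c's fields above the tid snapshot */
          __asm__ __volatile__("" ::: "memory");

          return c->freelist;
  }

  int main(void)
  {
          printf("freelist snapshot: %p\n", fastpath_snapshot());
          return 0;
  }

The retry only matters with CONFIG_PREEMPT, where migration between the two
reads is possible. If a stale pairing still slips through, the later
this_cpu_cmpxchg_double() on freelist and tid in the fast path fails and the
whole operation is redone, which is why the comment can end with "So, no
problem."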

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Sr. Network Kernel Developer at Red Hat
  Author of http://www.iptv-analyzer.org
  LinkedIn: http://www.linkedin.com/in/brouer