On Tue, Aug 25, 2015 at 11:56:38AM +0200, Ingo Molnar wrote:
> +void get_vmalloc_info(struct vmalloc_info *vmi)
> +{
> +     unsigned int cache_gen, gen;

I see you dropped the u64 thing, good, ignore that previous email.

> +
> +     /*
> +      * The common case is that the cache is valid, so first
> +      * read it, then check its validity.
> +      *
> +      * The two read barriers make sure that we read
> +      * 'cache_gen', 'vmap_info_cache' and 'gen' in
> +      * precisely that order:
> +      */
> +     cache_gen = vmap_info_cache_gen;
> +     smp_rmb();
> +     *vmi = vmap_info_cache;
> +     smp_rmb();
> +     gen = vmap_info_gen;
> +
> +     /*
> +      * If the generation counter of the cache matches that of
> +      * the vmalloc generation counter then return the cache:
> +      */
> +     if (cache_gen == gen)
> +             return;

There is one aspect of READ_ONCE() that smp_rmb() does not replace:
READ_ONCE() also keeps the compiler from splitting (tearing) the loads.

Without READ_ONCE() the compiler is free to turn those loads into
separate, out-of-order byte loads just because it's insane, which also
makes the WRITE_ONCE() pointless.
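
If you do keep this, a minimal (untested) sketch of what I mean for the
read side, using the same variables and barriers as in the patch:

	cache_gen = READ_ONCE(vmap_info_cache_gen);
	smp_rmb();
	*vmi = vmap_info_cache;
	smp_rmb();
	gen = READ_ONCE(vmap_info_gen);

READ_ONCE() only keeps the compiler from tearing or re-issuing the
generation loads; the barriers still order them against the struct copy.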

Now, I'm fairly sure none of this matters much: the info can change the
moment we've copied it, so the read is inherently racy.

And by that same argument I feel this is all somewhat over-engineered.

> +
> +     /* Make sure 'gen' is read before the vmalloc info: */
> +     smp_rmb();
> +     calc_vmalloc_info(vmi);
> +
> +     /*
> +      * All updates to vmap_info_cache_gen go through this spinlock,
> +      * so when the cache got invalidated, we'll only mark it valid
> +      * again if we first fully write the new vmap_info_cache.
> +      *
> +      * This ensures that partial results won't be used and that the
> +      * vmalloc info belonging to the freshest update is used:
> +      */
> +     spin_lock(&vmap_info_lock);
> +     if ((int)(gen-vmap_info_cache_gen) > 0) {
> +             vmap_info_cache = *vmi;
> +             /*
> +              * Make sure the new cached data is visible before
> +              * the generation counter update:
> +              */
> +             smp_wmb();
> +             vmap_info_cache_gen = gen;
> +     }
> +     spin_unlock(&vmap_info_lock);
> +}
> +
> +#endif /* CONFIG_PROC_FS */
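
And the matching update side, again just a sketch on top of the patch as
posted: the read of vmap_info_cache_gen is already serialized by the
spinlock, so only the store needs WRITE_ONCE() to pair with the reader's
READ_ONCE():

	spin_lock(&vmap_info_lock);
	if ((int)(gen - vmap_info_cache_gen) > 0) {
		vmap_info_cache = *vmi;
		/* Order the cache update before the generation update: */
		smp_wmb();
		WRITE_ONCE(vmap_info_cache_gen, gen);
	}
	spin_unlock(&vmap_info_lock);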