* Peter Zijlstra <pet...@infradead.org> wrote:

> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -42,6 +42,7 @@
>  #include <linux/module.h>
>  #include <linux/mman.h>
>  #include <linux/compat.h>
> +#include <linux/percpu-rwsem.h>
>  
>  #include "internal.h"
>  
> @@ -122,6 +123,42 @@ static int cpu_function_call(int cpu, int (*func) (void *info), void *info)
>       return data.ret;
>  }
>  
> +/*
> + * Required to migrate events between contexts.
> + *
> + * Migrating events between contexts is rather tricky; there is no real
> + * serialization around the perf_event::ctx pointer.
> + *
> + * So what we do is hold this rwsem over the remove_from_context and
> + * install_in_context. The remove_from_context ensures the event is inactive
> + * and will not be used from IRQ/NMI context anymore, and the remaining
> + * sites can acquire the rwsem read side.
> + */
> +static struct percpu_rw_semaphore perf_rwsem;
> +
> +static inline struct perf_event_context *perf_event_ctx(struct perf_event *event)
> +{
> +#ifdef CONFIG_LOCKDEP
> +     /*
> +      * Assert the locking rules outlined above; in order to dereference
> +      * event->ctx we must either be attached to the context or hold
> +      * perf_rwsem.
> +      *
> +      * XXX not usable from IPIs because the lockdep held lock context
> +      * will be wrong; maybe add trylock variants to the percpu_rw_semaphore
> +      */
> +     WARN_ON_ONCE(!(event->attach_state & PERF_ATTACH_CONTEXT) &&
> +                  (debug_locks && !lockdep_is_held(&perf_rwsem.rw_sem)));
> +#endif
> +
> +     return event->ctx;
> +}
> +
> +static inline struct perf_event_context *__perf_event_ctx(struct perf_event *event)
> +{
> +     return event->ctx;
> +}
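
For reference, the write side around a migration would then look 
something like the sketch below (perf_migrate_event() is a made-up 
wrapper and the helper signatures are approximate, but it shows the 
intended pattern of holding perf_rwsem across the remove/install pair):

	static void perf_migrate_event(struct perf_event *event,
				       struct perf_event_context *dst_ctx,
				       int cpu)
	{
		/*
		 * Exclude all readers of event->ctx while the pointer
		 * changes: remove the event from its old context (making
		 * it inactive), re-point it, and install it in the new
		 * context.
		 */
		percpu_down_write(&perf_rwsem);
		perf_remove_from_context(event, false);
		event->ctx = dst_ctx;
		perf_install_in_context(dst_ctx, event, cpu);
		percpu_up_write(&perf_rwsem);
	}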


So, if this approach is acceptable, I'd also rename event->ctx to 
event->__ctx, to make sure it isn't accidentally used without 
serialization in any old (or new) perf-related patches.
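
I.e. something like this (sketch only; the rest of the struct layout is 
elided, and the field would only ever be touched through the accessors):

	/* include/linux/perf_event.h */
	struct perf_event {
		/* ... */
		struct perf_event_context	*__ctx;	/* use perf_event_ctx() */
		/* ... */
	};

with the accessors from the patch returning event->__ctx:

	static inline struct perf_event_context *
	perf_event_ctx(struct perf_event *event)
	{
		/* same attach_state/lockdep assertion as in the hunk above */
		return event->__ctx;
	}

	static inline struct perf_event_context *
	__perf_event_ctx(struct perf_event *event)
	{
		return event->__ctx;
	}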

Thanks,

        Ingo