On Fri, Nov 07, 2014 at 10:09:59AM, Matt Fleming wrote:
On Fri, 07 Nov, at 10:08:04AM, Peter Zijlstra wrote:
>
> How is that supposed to work? You call __intel_cqm_event_count() on the
> one cpu per socket, but then you use a local_add, not an atomic_add,
> even though these adds can happen concurrently as per IPI broadcast.
Ouch, right. That's broken.
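
One way to close the race is for each IPI handler to accumulate into a shared atomic64_t instead of doing a non-atomic update on event->count. A minimal sketch; the rmid_read helper name and layout here are illustrative, only __rmid_read() and the RMID_VAL_* bits come from the patch quoted below:

	/* Illustrative helper: one accumulator shared by all IPI handlers. */
	struct rmid_read {
		unsigned int rmid;
		atomic64_t value;
	};

	static void __intel_cqm_event_count(void *info)
	{
		struct rmid_read *rr = info;
		u64 val;

		val = __rmid_read(rr->rmid);
		if (val & (RMID_VAL_ERROR | RMID_VAL_UNAVAIL))
			return;

		/* Safe even when readers on several sockets run concurrently. */
		atomic64_add(val, &rr->value);
	}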
On Thu, Nov 06, 2014 at 12:23:20PM, Matt Fleming wrote:
> +static void __intel_cqm_event_count(void *info)
> +{
> +	struct perf_event *event = info;
> +	u64 val;
> +
> +	val = __rmid_read(event->hw.cqm_rmid);
> +
> +	if (val & (RMID_VAL_ERROR | RMID_VAL_UNAVAIL))
> +		return;
> +
> +	local64_add(val, &event->count);
> +}
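
The objection is about how this handler gets invoked: the count path fires an IPI at one reader CPU per socket, so several CPUs can execute __intel_cqm_event_count() at the same time. A sketch of that call path, assuming a cqm_cpumask holding one reader CPU per socket; the caller shown here is a reconstruction for illustration, not code quoted in this thread:

	static u64 intel_cqm_event_count(struct perf_event *event)
	{
		/*
		 * Broadcast to one CPU per socket; the handlers run
		 * concurrently, so the local64_add() above is a racy
		 * read-modify-write on event->count.
		 */
		on_each_cpu_mask(&cqm_cpumask, __intel_cqm_event_count, event, 1);

		return local64_read(&event->count);
	}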
From: Matt Fleming
Add support for task events as well as system-wide events. This change
has a big impact on the way that we gather LLC occupancy values in
intel_cqm_event_read().
Currently, for system-wide (per-cpu) events we defer processing to
userspace, which knows how to discard all but one cpu result per package.
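
Put together with the atomic accumulator sketched earlier in the thread, the task-event read path could look roughly like the following; again a reconstruction under the same assumed names (rmid_read, cqm_cpumask), not the patch itself:

	static u64 intel_cqm_event_count(struct perf_event *event)
	{
		struct rmid_read rr = {
			.rmid  = event->hw.cqm_rmid,
			.value = ATOMIC64_INIT(0),
		};

		/*
		 * For task events, aggregate the per-socket occupancy
		 * values in the kernel instead of deferring the work
		 * to userspace as the per-cpu path does.
		 */
		on_each_cpu_mask(&cqm_cpumask, __intel_cqm_event_count, &rr, 1);

		local64_set(&event->count, atomic64_read(&rr.value));
		return local64_read(&event->count);
	}

The IPI handlers add into rr.value atomically, and event->count is written only once, from the caller, after all the per-socket readers have finished.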