Re: [PATCH] perf/x86/intel/cqm: Make sure the head event of cache_groups always has valid RMID

2017-05-22 Thread Shivappa Vikas




Besides, there's another bug: we retry rotating without resetting
nr_needed and start in __intel_cqm_rmid_rotate().

Those bugs combined led to the following oops.

WARNING: at arch/x86/kernel/cpu/perf_event_intel_cqm.c:186 __put_rmid+0x28/0x80()
...
 [] __put_rmid+0x28/0x80
 [] intel_cqm_rmid_rotate+0xba/0x440
 [] process_one_work+0x17b/0x470
 [] worker_thread+0x11b/0x400
...
BUG: unable to handle kernel NULL pointer dereference at   (null)


Yes, I saw these issues in the RMID rotation, and also that it even rotated the
RMIDs which were still in use before the limbo ones. That did not crash, but it
made it output weird data. Like David mentioned, that's why we started
writing the new patches.
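
To make the ordering issue concrete, here is a minimal C sketch. The helper
cqm_pick_recycled_rmid() is hypothetical and not part of the driver; it only
reuses the driver's cqm_rmid_free_lru list and struct cqm_rmid_entry to show
the intended order: hand out RMIDs that have already drained through limbo,
and never steal one that is still attached to a live group, since that is what
produced the weird occupancy data.

/*
 * Hypothetical helper, not in the driver: sketch of the intended
 * recycle order.  RMIDs parked on cqm_rmid_free_lru have already
 * drained their occupancy, so they are safe to hand out; an RMID still
 * attached to a live group must not be taken, or that group silently
 * starts reporting someone else's cache occupancy.
 */
static u32 cqm_pick_recycled_rmid(void)
{
	struct cqm_rmid_entry *entry;

	if (list_empty(&cqm_rmid_free_lru))
		return INVALID_RMID;	/* nothing clean yet; don't steal */

	entry = list_first_entry(&cqm_rmid_free_lru,
				 struct cqm_rmid_entry, list);
	list_del(&entry->list);

	return entry->rmid;
}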



...
 [] intel_cqm_rmid_rotate+0xba/0x440
 [] process_one_work+0x17b/0x470
 [] worker_thread+0x11b/0x400


I've managed to forget most if not all of that horror show. Vikas and
David seem to be working on a replacement, but until such a time it
would be good if this thing would not crash the kernel.

Guys, could you have a look? To me it appears to mostly have the right
shape, but like I said, I forgot most details...


The new patch is on the way, now that Thomas has agreed to the requirements we
sent a few weeks back. I expect to send it in a week or so. This fix seems fine.


Thanks,
Vikas



Re: [PATCH] perf/x86/intel/cqm: Make sure the head event of cache_groups always has valid RMID

2017-05-17 Thread David Carrillo-Cisneros
On Tue, May 16, 2017 at 7:38 AM, Peter Zijlstra  wrote:
> On Thu, May 04, 2017 at 10:31:43AM +0800, Zefan Li wrote:
>> It is assumed that the head of cache_groups always has a valid RMID,
>> which isn't true.
>>
>> When we deallocate RMIDs from conflicting events, we currently don't
>> move those events to the tail, and one of them can happen to be the
>> head. Another case is when we allocate RMIDs for all the events except
>> the head event in intel_cqm_sched_in_event().
>>
>> Besides, there's another bug: we retry rotating without resetting
>> nr_needed and start in __intel_cqm_rmid_rotate().
>>
>> Those bugs combined led to the following oops.
>>
>> WARNING: at arch/x86/kernel/cpu/perf_event_intel_cqm.c:186 __put_rmid+0x28/0x80()
>> ...
>>  [] __put_rmid+0x28/0x80
>>  [] intel_cqm_rmid_rotate+0xba/0x440
>>  [] process_one_work+0x17b/0x470
>>  [] worker_thread+0x11b/0x400
>> ...
>> BUG: unable to handle kernel NULL pointer dereference at   (null)

I ran into this bug a long time ago but never found an easy way to
reproduce it. Do you have one?

>> ...
>>  [] intel_cqm_rmid_rotate+0xba/0x440
>>  [] process_one_work+0x17b/0x470
>>  [] worker_thread+0x11b/0x400
>
> I've managed to forget most if not all of that horror show. Vikas and
> David seem to be working on a replacement, but until such a time it
> would be good if this thing would not crash the kernel.
>
> Guys, could you have a look? To me it appears to mostly have the right
> shape, but like I said, I forgot most details...

The patch LGTM. I ran into these issues before and fixed them in a
similar but messier way; then the rewrite started ...

>
>>
>> Cc: sta...@vger.kernel.org
>> Signed-off-by: Zefan Li 
Acked-by: David Carrillo-Cisneros 


Re: [PATCH] perf/x86/intel/cqm: Make sure the head event of cache_groups always has valid RMID

2017-05-16 Thread Peter Zijlstra
On Thu, May 04, 2017 at 10:31:43AM +0800, Zefan Li wrote:
> It is assumed that the head of cache_groups always has a valid RMID,
> which isn't true.
> 
> When we deallocate RMIDs from conflicting events, we currently don't
> move those events to the tail, and one of them can happen to be the
> head. Another case is when we allocate RMIDs for all the events except
> the head event in intel_cqm_sched_in_event().
> 
> Besides, there's another bug: we retry rotating without resetting
> nr_needed and start in __intel_cqm_rmid_rotate().
> 
> Those bugs combined led to the following oops.
> 
> WARNING: at arch/x86/kernel/cpu/perf_event_intel_cqm.c:186 __put_rmid+0x28/0x80()
> ...
>  [] __put_rmid+0x28/0x80
>  [] intel_cqm_rmid_rotate+0xba/0x440
>  [] process_one_work+0x17b/0x470
>  [] worker_thread+0x11b/0x400
> ...
> BUG: unable to handle kernel NULL pointer dereference at   (null)
> ...
>  [] intel_cqm_rmid_rotate+0xba/0x440
>  [] process_one_work+0x17b/0x470
>  [] worker_thread+0x11b/0x400

I've managed to forget most if not all of that horror show. Vikas and
David seem to be working on a replacement, but until such a time it
would be good if this thing would not crash the kernel.

Guys, could you have a look? To me it appears to mostly have the right
shape, but like I said, I forgot most details...

> 
> Cc: sta...@vger.kernel.org
> Signed-off-by: Zefan Li 
> ---
>  arch/x86/events/intel/cqm.c | 19 ++++++++++++++++---
>  1 file changed, 16 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/events/intel/cqm.c b/arch/x86/events/intel/cqm.c
> index 8c00dc0..c06a5ba 100644
> --- a/arch/x86/events/intel/cqm.c
> +++ b/arch/x86/events/intel/cqm.c
> @@ -553,8 +553,13 @@ static bool intel_cqm_sched_in_event(u32 rmid)
>  
>  	leader = list_first_entry(&cache_groups, struct perf_event,
>  				  hw.cqm_groups_entry);
> -	event = leader;
>  
> +	if (!list_empty(&cache_groups) && !__rmid_valid(leader->hw.cqm_rmid)) {
> +		intel_cqm_xchg_rmid(leader, rmid);
> +		return true;
> +	}
> +
> +	event = leader;
>  	list_for_each_entry_continue(event, &cache_groups,
>  				     hw.cqm_groups_entry) {
>  		if (__rmid_valid(event->hw.cqm_rmid))
> @@ -721,6 +726,7 @@ static void intel_cqm_sched_out_conflicting_events(struct perf_event *event)
>  {
>  	struct perf_event *group, *g;
>  	u32 rmid;
> +	LIST_HEAD(conflicting_groups);
>  
>  	lockdep_assert_held(&cache_mutex);
>  
> @@ -744,7 +750,11 @@ static void intel_cqm_sched_out_conflicting_events(struct perf_event *event)
>  
>  		intel_cqm_xchg_rmid(group, INVALID_RMID);
>  		__put_rmid(rmid);
> +		list_move_tail(&group->hw.cqm_groups_entry,
> +			       &conflicting_groups);
>  	}
> +
> +	list_splice_tail(&conflicting_groups, &cache_groups);
>  }
>  
>  /*
> @@ -773,9 +783,9 @@ static void intel_cqm_sched_out_conflicting_events(struct perf_event *event)
>   */
>  static bool __intel_cqm_rmid_rotate(void)
>  {
> -	struct perf_event *group, *start = NULL;
> +	struct perf_event *group, *start;
>  	unsigned int threshold_limit;
> -	unsigned int nr_needed = 0;
> +	unsigned int nr_needed;
>  	unsigned int nr_available;
>  	bool rotated = false;
>  
> @@ -789,6 +799,9 @@ static bool __intel_cqm_rmid_rotate(void)
>  	if (list_empty(&cache_groups) && list_empty(&cqm_rmid_limbo_lru))
>  		goto out;
>  
> +	nr_needed = 0;
> +	start = NULL;
> +
>  	list_for_each_entry(group, &cache_groups, hw.cqm_groups_entry) {
>  		if (!__rmid_valid(group->hw.cqm_rmid)) {
>  			if (!start)
> -- 
> 1.8.2.2
> 
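
For reference, a minimal sketch of the invariant the patch title refers to, using
names from the driver. The helper cqm_assert_head_has_rmid() is hypothetical and
not part of the patch; it only restates the assumption: whenever cache_groups is
non-empty, its head must carry a valid RMID, because the original
intel_cqm_sched_in_event() scan skips the head (list_for_each_entry_continue
starts after it) and so would never hand it an RMID.

/*
 * Hypothetical helper, not part of the patch: the invariant the fix
 * keeps.  If cache_groups is non-empty, its head must hold a valid
 * RMID, because the pre-patch intel_cqm_sched_in_event() never
 * revisits the head when assigning freed RMIDs.
 */
static void cqm_assert_head_has_rmid(void)
{
	struct perf_event *leader;

	lockdep_assert_held(&cache_mutex);

	if (list_empty(&cache_groups))
		return;

	leader = list_first_entry(&cache_groups, struct perf_event,
				  hw.cqm_groups_entry);
	WARN_ON_ONCE(!__rmid_valid(leader->hw.cqm_rmid));
}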


Re: [PATCH] perf/x86/intel/cqm: Make sure the head event of cache_groups always has valid RMID

2017-05-15 Thread Zefan Li
any comments?

On 2017/5/4 10:31, Zefan Li wrote:
> It is assumed that the head of cache_groups always has a valid RMID,
> which isn't true.
> 
> When we deallocate RMIDs from conflicting events, we currently don't
> move those events to the tail, and one of them can happen to be the
> head. Another case is when we allocate RMIDs for all the events except
> the head event in intel_cqm_sched_in_event().
> 
> Besides, there's another bug: we retry rotating without resetting
> nr_needed and start in __intel_cqm_rmid_rotate().
> 
> Those bugs combined led to the following oops.
> 
> WARNING: at arch/x86/kernel/cpu/perf_event_intel_cqm.c:186 __put_rmid+0x28/0x80()
> ...
>  [] __put_rmid+0x28/0x80
>  [] intel_cqm_rmid_rotate+0xba/0x440
>  [] process_one_work+0x17b/0x470
>  [] worker_thread+0x11b/0x400
> ...
> BUG: unable to handle kernel NULL pointer dereference at   (null)
> ...
>  [] intel_cqm_rmid_rotate+0xba/0x440
>  [] process_one_work+0x17b/0x470
>  [] worker_thread+0x11b/0x400
> 
> Cc: sta...@vger.kernel.org
> Signed-off-by: Zefan Li 
> ---
>  arch/x86/events/intel/cqm.c | 19 ++++++++++++++++---
>  1 file changed, 16 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/events/intel/cqm.c b/arch/x86/events/intel/cqm.c
> index 8c00dc0..c06a5ba 100644
> --- a/arch/x86/events/intel/cqm.c
> +++ b/arch/x86/events/intel/cqm.c
> @@ -553,8 +553,13 @@ static bool intel_cqm_sched_in_event(u32 rmid)
>  
>  	leader = list_first_entry(&cache_groups, struct perf_event,
>  				  hw.cqm_groups_entry);
> -	event = leader;
>  
> +	if (!list_empty(&cache_groups) && !__rmid_valid(leader->hw.cqm_rmid)) {
> +		intel_cqm_xchg_rmid(leader, rmid);
> +		return true;
> +	}
> +
> +	event = leader;
>  	list_for_each_entry_continue(event, &cache_groups,
>  				     hw.cqm_groups_entry) {
>  		if (__rmid_valid(event->hw.cqm_rmid))
> @@ -721,6 +726,7 @@ static void intel_cqm_sched_out_conflicting_events(struct perf_event *event)
>  {
>  	struct perf_event *group, *g;
>  	u32 rmid;
> +	LIST_HEAD(conflicting_groups);
>  
>  	lockdep_assert_held(&cache_mutex);
>  
> @@ -744,7 +750,11 @@ static void intel_cqm_sched_out_conflicting_events(struct perf_event *event)
>  
>  		intel_cqm_xchg_rmid(group, INVALID_RMID);
>  		__put_rmid(rmid);
> +		list_move_tail(&group->hw.cqm_groups_entry,
> +			       &conflicting_groups);
>  	}
> +
> +	list_splice_tail(&conflicting_groups, &cache_groups);
>  }
>  
>  /*
> @@ -773,9 +783,9 @@ static void intel_cqm_sched_out_conflicting_events(struct perf_event *event)
>   */
>  static bool __intel_cqm_rmid_rotate(void)
>  {
> -	struct perf_event *group, *start = NULL;
> +	struct perf_event *group, *start;
>  	unsigned int threshold_limit;
> -	unsigned int nr_needed = 0;
> +	unsigned int nr_needed;
>  	unsigned int nr_available;
>  	bool rotated = false;
>  
> @@ -789,6 +799,9 @@ static bool __intel_cqm_rmid_rotate(void)
>  	if (list_empty(&cache_groups) && list_empty(&cqm_rmid_limbo_lru))
>  		goto out;
>  
> +	nr_needed = 0;
> +	start = NULL;
> +
>  	list_for_each_entry(group, &cache_groups, hw.cqm_groups_entry) {
>  		if (!__rmid_valid(group->hw.cqm_rmid)) {
>  			if (!start)
> 
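
To see why the missing reset matters, below is a self-contained C reduction of
that third bug; rotate_demo() and group_has_rmid[] are hypothetical stand-ins,
not driver code, but they show how a counter initialized only at declaration
accumulates stale values across goto-again retries.

#include <stdbool.h>
#include <stdio.h>

/*
 * Hypothetical reduction of the retry bug, not driver code: nr_needed is
 * only initialized at declaration, so every "goto again" pass keeps
 * adding to the previous pass's count instead of recounting from zero.
 */
static bool group_has_rmid[4] = { false, true, false, true };

static bool rotate_demo(void)
{
	unsigned int nr_needed = 0;	/* bug: never reset on retry */
	int passes = 0;

again:
	for (int i = 0; i < 4; i++)
		if (!group_has_rmid[i])
			nr_needed++;	/* accumulates across passes */

	printf("pass %d: nr_needed=%u (expected 2)\n", passes, nr_needed);

	if (++passes < 3)
		goto again;	/* the fix resets nr_needed before retrying */

	return nr_needed == 2;
}

int main(void)
{
	return rotate_demo() ? 0 : 1;
}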


