Hi,

On 18.07.2017 15:02, Alexander Shishkin wrote:
> Alexey Budankov <alexey.budan...@linux.intel.com> writes:
>
>> +static void
>> +perf_event_groups_rotate(struct perf_event_groups *groups, int cpu)
>> +{
>> +        struct rb_node *node;
>> +        struct perf_event *node_event;
>> +
>> +        WARN_ON_ONCE(!groups);
>
> This seems redundant.
I used that for debugging.

>
>> +
>> +        list_rotate_left(&groups->list);
>> +
>> +        /* will replace rotation above in patch v5 3/4
>> +
>> +        node = groups->tree.rb_node;
>> +
>> +        while (node) {
>> +                node_event = container_of(node,
>> +                                struct perf_event, group_node);
>> +
>> +                if (cpu < node_event->cpu) {
>> +                        node = node->rb_left;
>> +                } else if (cpu > node_event->cpu) {
>> +                        node = node->rb_right;
>> +                } else {
>> +                        list_rotate_left(&node_event->group_list);
>> +                        break;
>> +                }
>> +        }
>> +
>> +        */
>
> Please don't do this, it doesn't add clarity.

Accepted. I put it here just to simplify the review process.

>
>> +static int
>> +perf_event_groups_iterate(struct perf_event_groups *groups,
>> +                perf_event_groups_iterate_f callback, void *data)
>> +{
>> +        int ret = 0;
>> +        struct perf_event *event;
>> +
>> +        WARN_ON_ONCE(!groups);
>> +
>> +        list_for_each_entry(event, &groups->list, group_list_entry) {
>> +                ret = callback(event, data);
>> +                if (ret)
>> +                        break;
>> +        }
>> +
>> +        /* will replace iteration above in patch v5 4/4
>> +
>> +        for (node = rb_first(groups); node; node = rb_next(node)) {
>> +                node_event = container_of(node, struct perf_event, group_node);
>> +                list_for_each_entry(event, &node_event->group_list,
>> +                                group_list_entry) {
>> +                        WARN_ON_ONCE(!(event->cpu == node_event->cpu));
>> +                        ret = callback(event, data);
>> +                        if (ret) {
>> +                                return ret;
>> +                        }
>> +                }
>> +        }
>> +
>> +        */
>
> Ditto.
>
> Regards,
> --
> Alex
>

Thanks,
Alexey
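
P.S. For anyone following the thread, below is a small standalone user-space
sketch of the layout the commented-out hunks hint at: events bucketed per CPU
in a search tree, a per-CPU list hanging off every bucket, rotation confined
to one CPU's list and iteration walking CPUs in order, then each CPU's events.
It is only an illustration under those assumptions (a plain unbalanced BST
stands in for the kernel's rb-tree, and all names are made up); it is not the
patch itself.

/*
 * Standalone user-space sketch (illustration only, not the kernel code):
 * events live in per-CPU buckets kept in a search tree keyed by cpu, so
 * rotation only touches one CPU's circular list and iteration walks the
 * CPUs in order.  A plain unbalanced BST stands in for the rb-tree and
 * every name here (cpu_bucket, groups_rotate, ...) is made up.
 */
#include <stdio.h>
#include <stdlib.h>

struct event {
        int id;
        struct event *next;             /* circular, per CPU */
};

struct cpu_bucket {
        int cpu;
        struct event *head;             /* current first event of this CPU */
        struct cpu_bucket *left, *right;
};

/* find the bucket for @cpu, creating it if it does not exist yet */
static struct cpu_bucket *bucket_get(struct cpu_bucket **link, int cpu)
{
        while (*link) {
                if (cpu < (*link)->cpu)
                        link = &(*link)->left;
                else if (cpu > (*link)->cpu)
                        link = &(*link)->right;
                else
                        return *link;
        }
        *link = calloc(1, sizeof(**link));
        (*link)->cpu = cpu;
        return *link;
}

static void bucket_add_event(struct cpu_bucket *b, int id)
{
        struct event *e = calloc(1, sizeof(*e));

        e->id = id;
        if (!b->head) {
                e->next = e;            /* one-element circle */
                b->head = e;
        } else {
                e->next = b->head->next;        /* splice in after the head */
                b->head->next = e;
        }
}

/* analogue of perf_event_groups_rotate(): rotate one CPU's list only */
static void groups_rotate(struct cpu_bucket *root, int cpu)
{
        while (root && root->cpu != cpu)
                root = cpu < root->cpu ? root->left : root->right;
        if (root && root->head)
                root->head = root->head->next;
}

/* analogue of perf_event_groups_iterate(): CPUs in order, then events */
static int groups_iterate(struct cpu_bucket *node,
                          int (*cb)(struct event *, void *), void *data)
{
        int ret;

        if (!node)
                return 0;
        ret = groups_iterate(node->left, cb, data);
        if (!ret && node->head) {
                struct event *e = node->head;

                do {
                        ret = cb(e, data);
                        e = e->next;
                } while (!ret && e != node->head);
        }
        return ret ? ret : groups_iterate(node->right, cb, data);
}

static int print_event(struct event *e, void *data)
{
        (void)data;
        printf("event %d\n", e->id);
        return 0;                       /* a non-zero return stops the walk */
}

int main(void)
{
        struct cpu_bucket *root = NULL;
        int cpu;

        for (cpu = 0; cpu < 2; cpu++) {
                bucket_add_event(bucket_get(&root, cpu), cpu * 10);
                bucket_add_event(bucket_get(&root, cpu), cpu * 10 + 1);
        }
        groups_rotate(root, 0);         /* only CPU 0's events are reordered */
        return groups_iterate(root, print_event, NULL);
}

The point of the bucketing is that rotation no longer has to scan a single
flat list of all events: the lookup narrows the work to the events of the
requested CPU, while the full walk still visits everything and can stop early
when the callback returns non-zero, mirroring the quoted hunks above.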