* Jason Low <jason.l...@hp.com> wrote:

> While running a database workload on a 16 socket machine, there were
> scalability issues related to itimers. The following link contains a
> more detailed summary of the issues at the application level.
> 
> https://lkml.org/lkml/2015/8/26/737
> 
> Commit 1018016c706f addressed the issue with the thread_group_cputimer
> spinlock taking up a significant portion of total run time.
> This patch series addresses the secondary issue where a lot of time is
> spent trying to acquire the sighand lock. It was found in some cases
> that 200+ threads were simultaneously contending for the same sighand
> lock, reducing throughput by more than 30%.
> 
> With this patch set (along with commit 1018016c706f mentioned above),
> the performance hit of itimers almost completely goes away on the
> 16 socket system.
> 
> Jason Low (4):
>   timer: Optimize fastpath_timer_check()
>   timer: Check thread timers only when there are active thread timers
>   timer: Convert cputimer->running to bool
>   timer: Reduce unnecessary sighand lock contention
> 
>  include/linux/init_task.h      |    3 +-
>  include/linux/sched.h          |    9 ++++--
>  kernel/fork.c                  |    2 +-
>  kernel/time/posix-cpu-timers.c |   63 ++++++++++++++++++++++++++++-----------
>  4 files changed, 54 insertions(+), 23 deletions(-)

Is there some itimers benchmark that can be used to measure the effects of
these changes?
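
Not the database workload above, of course, but a minimal user-space sketch
of the kind of microbenchmark I have in mind would be: arm a process-wide
ITIMER_PROF so every tick goes through the process-wide cputimer/sighand
path, spin a couple hundred CPU-bound threads, and report aggregate
throughput. (The thread count, timer interval and runtime below are
arbitrary assumptions, not values from the report.)

	/*
	 * Hypothetical itimer stress test sketch. Build with: gcc -O2 -pthread.
	 * Arms a process-wide ITIMER_PROF and runs NTHREADS CPU-bound threads,
	 * then prints total loop iterations as a crude throughput number.
	 */
	#include <pthread.h>
	#include <signal.h>
	#include <stdio.h>
	#include <sys/time.h>
	#include <unistd.h>

	#define NTHREADS	200	/* assumption: roughly the contention level cited above */
	#define RUN_SECONDS	10

	static volatile sig_atomic_t stop;
	static unsigned long long counts[NTHREADS];

	static void prof_handler(int sig)
	{
		(void)sig;	/* just absorb SIGPROF; the timer tick path is what we stress */
	}

	static void *worker(void *arg)
	{
		unsigned long long local = 0;

		while (!stop)
			local++;	/* CPU-bound work, accounted by ITIMER_PROF */
		*(unsigned long long *)arg = local;
		return NULL;
	}

	int main(void)
	{
		struct itimerval it = {
			.it_interval	= { .tv_sec = 0, .tv_usec = 1000 },	/* 1ms profiling timer */
			.it_value	= { .tv_sec = 0, .tv_usec = 1000 },
		};
		pthread_t tids[NTHREADS];
		unsigned long long total = 0;
		unsigned int left = RUN_SECONDS;
		int i;

		signal(SIGPROF, prof_handler);
		setitimer(ITIMER_PROF, &it, NULL);	/* process-wide timer => cputimer path */

		for (i = 0; i < NTHREADS; i++)
			pthread_create(&tids[i], NULL, worker, &counts[i]);

		/* sleep() gets cut short by SIGPROF delivery, so loop until done */
		while ((left = sleep(left)) > 0)
			;
		stop = 1;

		for (i = 0; i < NTHREADS; i++) {
			pthread_join(tids[i], NULL);
			total += counts[i];
		}
		printf("%llu iterations in %d seconds\n", total, RUN_SECONDS);
		return 0;
	}

Comparing the iteration count with and without the itimer armed, before and
after the series, would give a rough measure of the remaining overhead.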

Thanks,

        Ingo