On Fri, Jan 23, 2015 at 03:45:55PM -0800, Jason Low wrote:
> On a side note, if we just move the cputimer->running = 1 to after the
> call to update_gt_cputime in thread_group_cputimer(), then we don't have
> to worry about concurrent adds occurring in this function?
Yeah, maybe.. There are a few r
On Fri, 2015-01-23 at 21:08 +0100, Peter Zijlstra wrote:
> On Fri, Jan 23, 2015 at 11:23:36AM -0800, Jason Low wrote:
> > On Fri, 2015-01-23 at 10:25 +0100, Peter Zijlstra wrote:
> > > On Thu, Jan 22, 2015 at 07:31:53PM -0800, Jason Low wrote:
> > > > +static void update_gt_cputime(struct thread_group_cputimer *a, struct task_cputime *b)
On Fri, Jan 23, 2015 at 10:07:31AM -0800, Jason Low wrote:
> On Fri, 2015-01-23 at 10:33 +0100, Peter Zijlstra wrote:
> > > + .running = ATOMIC_INIT(0), \
> > > + atomic_t running;
> > > + atomic_set(&sig->cputimer.running, 1);
> > > @@ -174,7 +174,7 @@ static inline bool cputimer_running(struct task_struct *tsk)
On Fri, Jan 23, 2015 at 11:23:36AM -0800, Jason Low wrote:
> On Fri, 2015-01-23 at 10:25 +0100, Peter Zijlstra wrote:
> > On Thu, Jan 22, 2015 at 07:31:53PM -0800, Jason Low wrote:
> > > +static void update_gt_cputime(struct thread_group_cputimer *a, struct task_cputime *b)
> > > {
> > > +
On Fri, 2015-01-23 at 10:25 +0100, Peter Zijlstra wrote:
> On Thu, Jan 22, 2015 at 07:31:53PM -0800, Jason Low wrote:
> > +static void update_gt_cputime(struct thread_group_cputimer *a, struct task_cputime *b)
> > {
> > + if (b->utime > atomic64_read(&a->utime))
> > + atomic64_set(&a->utime, b->utime);
On Fri, 2015-01-23 at 10:33 +0100, Peter Zijlstra wrote:
> > + .running = ATOMIC_INIT(0), \
> > + atomic_t running;
> > + atomic_set(&sig->cputimer.running, 1);
> > @@ -174,7 +174,7 @@ static inline bool cputimer_running(struct task_struct *tsk)
> + .running = ATOMIC_INIT(0), \
> + atomic_t running;
> + atomic_set(&sig->cputimer.running, 1);
> @@ -174,7 +174,7 @@ static inline bool cputimer_running(struct task_struct *tsk)
> + if (!atomic_read(&cputimer->running))
> + if (
On Thu, Jan 22, 2015 at 07:31:53PM -0800, Jason Low wrote:
> +static void update_gt_cputime(struct thread_group_cputimer *a, struct task_cputime *b)
> {
> + if (b->utime > atomic64_read(&a->utime))
> + atomic64_set(&a->utime, b->utime);
>
> + if (b->stime > atomic64_read(&a->stime))
> + atomic64_set(&a->stime, b->stime);
On Thu, Jan 22, 2015 at 07:31:53PM -0800, Jason Low wrote:
> When running a database workload, we found a scalability issue
> with itimers.
>
> Much of the problem was caused by the thread_group_cputimer spinlock.
> Each time we account for group system/user time, we need to obtain a
> thread_group_cputimer spinlock.