On Wed, 2008-01-09 at 18:29 -0500, Steven Rostedt wrote:
> +cycle_t notrace get_monotonic_cycles(void)
> +{
> +       cycle_t cycle_now, cycle_delta, cycle_raw, cycle_last;
> +
> +       do {
> +               /*
> +                * cycle_raw and cycle_last can change on
> +                * another CPU, and we need the delta calculation
> +                * of cycle_now and cycle_last to happen atomically,
> +                * as well as the addition to cycle_raw. We don't
> +                * need to grab any locks; we just keep retrying
> +                * until all the calculations come from one
> +                * consistent state.
> +                *
> +                * In fact, we __can't__ grab any locks. This
> +                * function is called from the latency_tracer, which
> +                * can be called from anywhere. By grabbing any locks
> +                * (including seqlocks) we risk deadlocking.
> +                */
> +               cycle_raw = clock->cycle_raw;
> +               cycle_last = clock->cycle_last;
> +
> +               /* read clocksource: */
> +               cycle_now = clocksource_read(clock);
> +
> +               /* calculate the delta since the last update_wall_time: */
> +               cycle_delta = (cycle_now - cycle_last) & clock->mask;
> +
> +       } while (cycle_raw != clock->cycle_raw ||
> +                cycle_last != clock->cycle_last);
> +
> +       return cycle_raw + cycle_delta;
> +}
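
For reference, here is a minimal user-space sketch of the same
retry-until-consistent idea, just so the pattern is easier to discuss
outside the kernel. The names (snapshot, read_counter, hw_counter) are
purely illustrative stand-ins, not kernel APIs, and the clock->mask
wraparound handling is omitted for brevity.

#include <stdint.h>
#include <stdio.h>

struct snapshot {
        volatile uint64_t base;         /* analogous to clock->cycle_raw  */
        volatile uint64_t last;         /* analogous to clock->cycle_last */
};

static struct snapshot snap;
static volatile uint64_t hw_counter;    /* stand-in for the hardware clocksource */

/* stand-in for clocksource_read(): just return the fake counter */
static uint64_t read_counter(void)
{
        return hw_counter;
}

static uint64_t lockless_monotonic(void)
{
        uint64_t base, last, now;

        do {
                /* take local copies of the shared state */
                base = snap.base;
                last = snap.last;

                /* sample the counter while holding no locks */
                now = read_counter();

                /*
                 * If a concurrent updater changed either field while
                 * we were sampling, the pair we used may be
                 * inconsistent; retry until both fields are unchanged
                 * across the whole calculation.
                 */
        } while (base != snap.base || last != snap.last);

        return base + (now - last);
}

int main(void)
{
        hw_counter = 42;
        printf("%llu\n", (unsigned long long)lockless_monotonic());
        return 0;
}

Note that this only shows the consistency loop; the kernel version
additionally masks the delta with clock->mask to cope with the counter
wrapping.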

The last time I checked, this change caused problems for me with the -rt
latency tracer. I haven't tested this tree, but all things being equal I
would imagine the problem exists here also.

Daniel
