On Tue, Mar 05, 2013 at 02:34:25AM +0000, Tang, Feng wrote:
> Hi Jason,
> 
> Sorry, I forgot to add you in cc list in the first place. Please
> help to review the patch series, thanks!

Sure, I didn't get CC'd on the patches, so this is an imperfect reply,
but..

Did you consider an approach closer to the function I outlined to
John:

// Drops some small precision along the way but is simple..
static inline u64 cyclecounter_cyc2ns_128(const struct cyclecounter *cc,
                                          cycle_t cycles)
{
    u64 max = U64_MAX/cc->mult;   // largest cycle count that can't overflow cycles*mult
    u64 num = cycles/max;         // number of full 'max'-sized chunks
    u64 result = num * ((max * cc->mult) >> cc->shift);
    return result + cyclecounter_cyc2ns(cc, cycles - num*max);
}

Rather than the while loop, which, I suspect, drops more precision
than something like the above (replace cyclecounter with clocksource).
At the very least, keeping it as a distinct inline will let someone
come by one day and implement a proper 128 bit multiply...
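
For reference, here's a minimal sketch of what an exact variant could
look like, splitting the cycle count into 32 bit halves so each partial
product fits in 64 bits (this assumes cc->shift <= 32 and that the
converted ns value itself fits in a u64; the function name is just
illustrative, same struct cyclecounter / cycle_t as above):

// Exact for shift <= 32, provided the ns result fits in 64 bits.
static inline u64 cyclecounter_cyc2ns_exact(const struct cyclecounter *cc,
                                            cycle_t cycles)
{
    u32 lo = cycles;        // low 32 bits of the cycle count
    u32 hi = cycles >> 32;  // high 32 bits of the cycle count
    u64 ns;

    ns = ((u64)lo * cc->mult) >> cc->shift;   // 32x32 product, can't overflow
    if (hi)
        ns += ((u64)hi * cc->mult) << (32 - cc->shift);  // exact for shift <= 32

    return ns;
}

The hi term is shifted left rather than right, so no bits are dropped
there; the only truncation is the sub-ns fraction of the lo term, the
same as in the plain cyclecounter_cyc2ns().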

You may also want to CC the maintainers of all the ARM subsystems that
use read_persistent_clock and check with them to ensure this new
interface will let them migrate their implementations as well.

>           * Solve the problem of distinguishing S3 from S4, as the
>             clocksource counter will be reset after coming out of S4.

Hrm, what if the counter wraps during suspend? This probably isn't a
problem for a 64 bit TSC though, since at a few GHz it would take more
than a century to wrap..

Is it impossible for the clocksource to track whether S3 or S4 was entered?
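
Roughly what I have in mind (callback names and details are entirely
hypothetical, and of course this check is itself fooled by a wrap
across suspend, per the question above):

// Sketch only: remember the raw counter at suspend, and on resume treat a
// counter that went backwards as evidence the hardware was reset (i.e. S4).
// Uses the clocksource's own .suspend/.resume hooks from
// <linux/clocksource.h>; "myclk" is a made-up name.
static cycle_t myclk_suspend_cycles;

static void myclk_suspend(struct clocksource *cs)
{
    myclk_suspend_cycles = cs->read(cs);
}

static void myclk_resume(struct clocksource *cs)
{
    if (cs->read(cs) < myclk_suspend_cycles)
        pr_info("myclk: counter reset across suspend, assuming S4\n");
}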

Regards,
Jason