Stefan,

--On 25 July 2013 11:19:29 +0200 Stefan Hajnoczi <stefa...@gmail.com> wrote:

On Sat, Jul 20, 2013 at 07:06:38PM +0100, Alex Bligh wrote:
Add utility functions to qemu-timer.c for nanosecond timing.

Ensure we keep track of all QEMUClocks on a list.

Add qemu_clock_deadline_ns and qemu_clock_deadline_all_ns to
calculate deadlines to nanosecond accuracy.

Add utility function qemu_soonest_timeout to calculate soonest deadline.

Add qemu_timeout_ns_to_ms to convert a timeout in nanoseconds back to
milliseconds for when ppoll is not used.
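For reference, a standalone sketch of what such a conversion might look like (this is an illustration of the intended semantics, not QEMU's actual implementation): -1 means block forever and must be preserved, and rounding must go up so that poll() never returns before the deadline:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch only: convert a nanosecond timeout to milliseconds for poll().
 * The function name and clamping behaviour are assumptions for
 * illustration. */
static int qemu_timeout_ns_to_ms_sketch(int64_t ns)
{
    if (ns < 0) {
        return -1; /* infinite timeout stays infinite */
    }
    /* round up: waking slightly late is harmless, waking early
     * would make the caller spin */
    int64_t ms = (ns + 999999) / 1000000;
    if (ms > INT32_MAX) {
        ms = INT32_MAX; /* clamp to what poll() can represent */
    }
    return (int)ms;
}
```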

Please split this into smaller patches.  There are several logical
changes happening here.

OK

@@ -61,6 +71,15 @@ int64_t cpu_get_ticks(void);
 void cpu_enable_ticks(void);
 void cpu_disable_ticks(void);

+static inline int64_t qemu_soonest_timeout(int64_t timeout1, int64_t timeout2)
+{
+    /* we can abuse the fact that -1 (which means infinite) is a maximal
+     * value when cast to unsigned. As this is disgusting, it's kept in
+     * one inline function.
+     */
+    return ((uint64_t) timeout1 < (uint64_t) timeout2) ? timeout1 : timeout2;

The straightforward version isn't much longer than the commented casting
trick:

if (timeout1 == -1) {
    return timeout2;
} else if (timeout2 == -1) {
    return timeout1;
} else {
    return timeout1 < timeout2 ? timeout1 : timeout2;
}

Well, it should be (timeout1 < 0) for consistency. It may be a
micro-optimisation, but I'm pretty sure the casting trick produces better
code. With the comment, it's arguably more readable too.

@@ -48,6 +44,8 @@ struct QEMUClock {

     int type;
     bool enabled;
+
+    QLIST_ENTRY(QEMUClock) list;

I don't think global state is necessary.  The
run_timers()/clock_deadline() users have QEMUClock references, they can
just call per-QEMUClock functions instead of requiring qemu-timers.c to
keep a list.

This way AioContext clocks are safe to use from threads.

I don't think that's true. The thing that uses the list is
qemu_run_all_timers, which doesn't otherwise have a list of all *clocks*
(as opposed to timers).

I had assumed (possibly wrongly) that AioContexts and clocks were
only created / deleted from the main thread. This is certainly true
currently, as there is only one AioContext (as far as I can see - perhaps
I'm wrong here) and the other clocks are created on init.

If we need to support creating AioContexts from other threads, then I
think we probably do need locking here.

+void qemu_free_clock(QEMUClock *clock)
+{
+    QLIST_REMOVE(clock, list);
+    g_free(clock);

assert that there are no timers left?

Yes, I wasn't quite sure of the right semantics here, as no clocks are
currently ever destroyed. I'm not quite sure how we know all timers
are destroyed when an AioContext is destroyed. Should I manually free
them, or is asserting that none remain the right approach?

--
Alex Bligh
