"Jeff Garland" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]
> > > I think this is a good addition, but we should probably make the
> > > addition for all Win32 compilers since I think this is actually
> > > part of the Win32 api.
> >
> > I agree with that. Would it be better to make it a millisec_clock, or
> > just use the microsec_clock but the resolution is only milliseconds?
>
> Hmm, I'm thinking that for consistency it would probably be better to
> call it millisec_clock.
Could be. I might be a bit off here (coming in late to the discussion), but I'd prefer consistency in my code: using microsec_clock for both Windows and Unix code, even if the actual resolution depends on how often the system time is updated on the Win platforms.

If you plan to timestamp events with low overhead, the easiest and fastest way to get the system time is GetSystemTimeAsFileTime (assuming you can defer the conversion from FILETIME to SYSTEMTIME until later). Just remember that you'll never (?) see the system time updated more often than every 10 ms (or 15 ms on SMP systems). Even though you can improve the achievable Sleep() resolution to roughly 1 millisecond with NtSetTimerResolution or the multimedia timers, that doesn't seem to affect the system time updates (I've tried, and I've never seen them occur more often than the standard 10/15 ms).

Anyone else have comments on that? I have no experience with non-Intel or 64-bit Windows, though.

// Johan

_______________________________________________
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost