On 26/03/07, Marc Lörner <[EMAIL PROTECTED]> wrote:
On Monday 26 March 2007 00:08, andrzej zaborowski wrote:
> On 26/03/07, andrzej zaborowski <[EMAIL PROTECTED]> wrote:
<snip>
> > > +# warning non-optimized CPU
> > > +#include <sys/time.h>
> > > +#include <time.h>
> > > +
> > > static inline int64_t cpu_get_real_ticks (void)
> > > {
> > > - static int64_t ticks = 0;
> > > - return ticks++;
> > > + struct timeval tv;
> > > + static int64_t i = 0;
> > > + int64_t j;
> > > +
> > > + gettimeofday(&tv, NULL);
> > > + do {
> > > + j = (tv.tv_sec * (uint64_t) 1000000) + tv.tv_usec;
> > > + } while (i == j);
> > > + i = j;
> > > + return j;
> >
> > Isn't this an infinite loop? gettimeofday() was left out of the loop.
> >
> > How about "return j + (ticks++)" instead of the loop? If I understand
> > correctly it may slow things down to below 1Hz.
>
> (I wanted to say MHz)
I don't think so: "j" is set inside the loop and the while-condition is "j==i",
so unless "(tv.tv_sec * (uint64_t) 1000000) + tv.tv_usec" always computes the
same value, the do-while block only executes once.
Well, inside the loop it does always compute the same value, doesn't it?
gettimeofday() is only called once, before the loop, so tv never changes
between iterations. And within a single microsecond it also returns the
same value across successive calls to cpu_get_real_ticks.
<snip>
Regards