On 29 Nov 2004, at 14:47, Geoff Canyon wrote:

On Nov 29, 2004, at 3:56 AM, Richard Gaskin wrote:

Dave Cragg wrote:
On 29 Nov 2004, at 09:11, Richard Gaskin wrote:
Scott Rossi wrote:
Both of the above routines provide the same output. However, when viewing the %CPU use on a Mac OS X system with Activity Monitor, CPU usage is clearly dependent on the frequency of the "send in" message: at a 100-millisecond interval the first handler runs at about 15% usage, and at a 50-millisecond interval it runs at about 30% usage (makes sense). Amazingly, the "wait x with messages" handler runs at less than 1% usage. And because "with messages" does not block other messages from being sent, this seems a very efficient way to run a timer.

Obviously the above is useful only in situations where an accuracy of 1 second or so is enough, but at first glance I can't see any drawback to using this method. Can you?
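For context, the two patterns being compared are presumably along these lines (the handler and field names here are invented for illustration, not Scott's actual code):

-- polling pattern: re-send the update message every 50-100 milliseconds
on updateClock
   put the time into field "timeDisplay"
   send "updateClock" to me in 100 milliseconds
end updateClock

-- wait-based pattern: update, then wait a full second without blocking other messages
on runClock
   repeat forever  -- a real handler would need an exit condition
      put the time into field "timeDisplay"
      wait 1 second with messages
   end repeat
end runClock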


None that I can see, but I managed to get myself confused on the issue: if you only want the time updated once a second, why not just send the message in 1 second rather than polling several times a second?

I guess Scott was concerned about the smoothness of the time display ticking over. If you send every 1 second, and there is something holding up message processing, the timer may be late to update. Increasing the frequency increases the chance of getting it right (but doesn't guarantee it).

Wouldn't any issue that would delay the firing of a one-second timer also delay a 1/10th-second timer?

Anything that's going to stop a 1-second message is also going to stop a 1/10-second message. I suppose it's possible for send...in to take slightly longer than a second, so when a visible display is ticking over I generally use something like 55 ticks, which should (virtually) guarantee that it will tick over.
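Roughly this pattern, in other words (handler and field names are hypothetical; 60 ticks = 1 second):

on tickOver
   put the time into field "timeDisplay"
   send "tickOver" to me in 55 ticks  -- just under one second
end tickOver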

My brain's hurting thinking about this. But if you send in 55 ticks, there's a good chance you won't have hit the next second when the message is handled, and so the timer display won't update until the next time through. So you're likely to get a visibly uneven update.


If you send in 1 second, it will probably be a fraction more than 1 second between the "send" and the point where the display is updated. That extra fraction accumulates on every cycle, so sooner or later one update falls two second-boundaries after the previous one, and you see a 2 second jump in the display.


wait...with messages is perfectly fine to use. The case that hogs the system is when you wait for a system condition to change: wait until the mouse is up, repeat while the mouse is down, that sort of thing. Even then it's okay if you're actually _doing_ something. The problem comes in when you have something like
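(presumably an empty polling loop, with no work and no wait in the body)

repeat until the mouse is down
   -- nothing here, so the engine just spins, re-checking the mouse as fast as it can
end repeat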



The "repeat until the mouse is down" problem I can understand. But I need more convincing about "wait".


wait until (x + y > 1000) with messages -- assume x & y are script locals or globals
wait until the mouse is up with messages
wait until myFunction() with messages -- assume myFunction returns true or false


Won't the frequency at which the conditional is evaluated be the same? If it gets evaluated as soon as nothing else is happening, then you'd expect all three to be processor intensive. But it doesn't seem to happen that way. On the other hand, I can't see what determines the frequency either.
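One way to get a handle on it would be to count the evaluations directly; a minimal sketch, with invented names (sCheckCount is a script local):

local sCheckCount

function fiveSecondsUp pStart
   add 1 to sCheckCount  -- bumped each time the engine re-evaluates the wait condition
   return (the seconds - pStart) >= 5
end fiveSecondsUp

on testWaitFrequency
   put 0 into sCheckCount
   put the seconds into tStart
   wait until fiveSecondsUp(tStart) with messages
   answer sCheckCount && "evaluations in 5 seconds"
end testWaitFrequency

Comparing that count against the %CPU figures in Activity Monitor would show how often the engine actually polls the condition.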

Cheers
Dave


_______________________________________________ use-revolution mailing list [EMAIL PROTECTED] http://lists.runrev.com/mailman/listinfo/use-revolution
