Hi,

I'm probably missing something obvious and you guys on Glass / Prism / Quantum 
can help set me straight. I was thinking tonight of a different way of 
initiating pulse events that would, I think, completely smooth out the pulses 
such that we don't end up with "drift" due to the timer being at a different 
rate than the GPU.

Suppose we have two variables in the system (and for simplicity let's talk about 
a single Scene, because one problem I think this idea has is with multiple 
scenes and I want to discuss that separately after the core mechanism is 
understood):

        - boolean pendingPulse
        - int runningAnimationCounter

Whenever an animation starts, the runningAnimationCounter is incremented. When 
an animation ends, it is decremented (or it could be a Set<Animation> or 
whatever). The pendingPulse is simply false to start with, and is checked 
before we submit another pulse. Whenever a node in the scene graph becomes 
dirty, or the scene is resized, or stylesheets are changed, or in any case 
something happens that requires us to draw again, we check this flag and fire a 
new pulse if one is not already pending.
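
To make that concrete, here's roughly the bookkeeping I have in mind (a Java 
sketch with made-up names, not the actual Quantum classes):

    import java.util.concurrent.atomic.AtomicBoolean;
    import java.util.concurrent.atomic.AtomicInteger;

    class PulseRequester {
        private final AtomicBoolean pendingPulse = new AtomicBoolean(false);
        private final AtomicInteger runningAnimationCounter = new AtomicInteger();

        void animationStarted() {
            runningAnimationCounter.incrementAndGet();
            requestPulse();   // starting an animation needs a pulse too
        }

        void animationStopped() {
            runningAnimationCounter.decrementAndGet();
        }

        // Called whenever a node goes dirty, the scene is resized, a
        // stylesheet changes, etc. -- anything that means we must draw again.
        void requestPulse() {
            if (pendingPulse.compareAndSet(false, true)) {
                postPulseEvent();   // enqueue a pulse on the FX thread
            }
        }

        void postPulseEvent() { /* hand a pulse event to Glass */ }
    }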

When a pulse occurs, we process animations first, then CSS, then layout, then 
validate all the bounds, and *then we block* until the rendering thread is 
available for synchronization. I believe this is what we are doing today (it 
was a change Steve and I looked at with Jasper a couple months ago IIRC).
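
Continuing the sketch above, my understanding of the current order is roughly 
(method names invented):

    void pulse() {
        processAnimations();            // advance the running animations
        processCSS();                   // apply CSS to dirty nodes
        layoutScene();                  // run layout
        validateBounds();               // validate all the bounds
        synchronizeWithRenderThread();  // *block* until the render thread is
                                        // available, then hand it the frame
        pick();                         // the pick at the end of the pulse
    }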

But now for the new part. Immediately after synchronization, we check the 
runningAnimationCounter. If it is > 0, then we fire off a new pulse and leave 
the pendingPulse flag set to true. If runningAnimationCounter == 0, then we 
flip pendingPulse to false. Other than the pick that always happens at the end 
of the pulse, we do nothing else new and, if the pick didn't cause state to 
change, we are now quiescent.
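
In code, the only new bit is a check right after synchronization, something 
like:

    // Immediately after synchronization (sketch only):
    if (runningAnimationCounter.get() > 0) {
        // An animation is still running: fire the next pulse right away and
        // leave pendingPulse == true so nobody queues a duplicate.
        postPulseEvent();
    } else {
        // Nothing is animating: clear the flag so the next dirty node,
        // resize, stylesheet change, etc. can request a fresh pulse.
        pendingPulse.set(false);
    }
    // Other than the usual pick at the end of the pulse, nothing else is
    // new; if the pick doesn't dirty anything, we're quiescent.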

Meanwhile, the render thread has run off doing its thing. The last step of 
rendering is the present, where we will block until the thing is presented, 
which, when we return, would put us *immediately* at the start of the next 
16.66ms cycle. Since the render thread has just completed its duties, it goes 
back to waiting until the FX thread comes around asking to sync up again.
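
The render thread loop I'm picturing is roughly (sketch, names invented):

    void renderLoop() {
        while (running) {
            waitForSyncFromFxThread();   // block until the FX thread comes
                                         // around asking to sync up
            renderFrame();               // render the synchronized frame
            present();                   // blocks until presented; when it
                                         // returns we're at the start of the
                                         // next ~16.66ms cycle
        }
    }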

If there is an animation going on such that a new pulse had been fired 
immediately after synchronization, then that new pulse would have been handled 
while the previous frame was being rendered. Most likely, by the time the 
render thread completes presenting and comes back to check with the FX thread, 
it will find that the FX thread is already waiting for it with the next frame's 
data. It will synchronize immediately and then carry on rendering another frame.

I think the way this would behave is that, when an animation is first played, 
you will get two pulses close to each other. The first pulse will do its 
business and then synchronize and then immediately fire off another pulse. That 
next pulse will then also get processed and then the FX thread will block until 
the previous frame finishes rendering. During this time, additional events 
(either application-generated via runLater calls happening on background 
threads, or from OS events) will get queued up. Between pulse #2 and pulse #3, 
a bunch of other events will get processed, essentially playing catch-up. 
My guess is that this won't be a problem but you might see a hiccup at the 
start of a new animation if the event queue is too full and it can't process 
all that stuff in 16ms (because at this point we're really multi-threaded 
between the FX and render threads and have nearly 16ms for each thread to do 
its business, instead of the 8ms you'd have in a single-threaded system).
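
Spelling out the budget arithmetic behind that:

    1000 ms / 60 frames  ~=  16.66 ms per frame at 60Hz
    pipelined:            ~16.66 ms for the FX thread and ~16.66 ms for the
                          render thread, running in parallel
    single threaded:      ~16.66 ms / 2  ~=  8.33 ms apiece, since the two
                          jobs would have to share the same frame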

Another question I have is around resize events and how those work. If they 
also come in to Glass on the FX thread (but at a higher priority than user 
events like a pulse or other input events?), then what will happen is that we 
will get a resize event and process half a pulse (or maybe a whole pulse? 
animations+css+layout or just css+layout?) and then render, pretty much as 
fast as we can.

As for multiple scenes, I'm actually curious how this happens today. If I have 
2 scenes, and we have just a single render thread servicing both, then when I 
go to present, it blocks? Or is there a non-blocking present method that we use 
instead? Because if we block, then having 2 scenes would cut you down to 30fps 
maximum, wouldn't it? If we are non-blocking today (is that possible?) then the 
only way this proposed solution would work is if there was a different render 
thread per stage (which actually is something I think we ought to be doing 
anyway?).
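
To spell out the worry, assuming a single shared render thread and a blocking 
present (sketch, names invented):

    // Two scenes, one render thread, blocking present:
    while (running) {
        renderAndPresent(sceneA);   // blocks ~16.66 ms waiting on vsync
        renderAndPresent(sceneB);   // blocks another ~16.66 ms
        // => each scene is presented at most once every ~33 ms, i.e. ~30fps
    }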

Thanks
Richard

