Chris Cannam wrote:
> There are two ways we can know if something has changed during recording. One
> is that the pointer has moved (because we need to continually extend the
> segment block as we record). The other is that something in the recording
> segment itself has changed.
>
> At one point previously we talked about continually updating the end marker
> time in the recorded segment to provoke an update. I was opposed to that,
> because that's not what the end marker is supposed to mean (it's intended for
> reducing the apparent duration of a segment, not extending it, and it's only
> supposed to exist if the user has overridden the default).
I've tried letting the model know of the pointer position time-wise, so
it can be used to compute the end of any recording segment. This isn't
committed at the moment as it doesn't seem to work yet :-).
> However, for audio segments the end marker effectively _does_ change
> continually during recording, because by default the end marker is inherited
> from the end time, and the end time is calculated from the audio end time (in
> RealTime units), and that is updated while we record
> (rosegardenguidoc.cpp:2221). All that's missing is a call to
> Segment::notifyEndMarkerChange when that value changes on an audio segment,
> and the canvas will update itself with no further code needed. So, I've
> committed that.
>
> That gives us audio recording segments. It does seem to cost more in CPU than
> I would have hoped -- do we have more updating than just the single segment
> that's changed?
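The notify-on-end-marker-change mechanism described above can be sketched as a plain observer pattern. The classes below are simplified stand-ins for illustration only, not Rosegarden's real Segment or CompositionView API: the segment notifies its observers only when the audio end time actually changes, so the canvas can refresh just that segment.

```cpp
#include <cassert>
#include <functional>
#include <utility>
#include <vector>

// Illustrative stand-in for a recording segment; not Rosegarden's Segment.
class SegmentStub {
public:
    using Observer = std::function<void(double /*newEnd*/)>;

    void addObserver(Observer o) { m_observers.push_back(std::move(o)); }

    // Called while recording as more audio arrives -- analogous to updating
    // the audio end time (in RealTime units) during recording.
    void setAudioEndTime(double t) {
        if (t == m_end) return;   // no change, no notification
        m_end = t;
        notifyEndMarkerChange();  // the "missing call" from the discussion
    }

    double endTime() const { return m_end; }

private:
    void notifyEndMarkerChange() {
        for (auto &o : m_observers) o(m_end);
    }

    double m_end = 0.0;
    std::vector<Observer> m_observers;
};
```

With this shape, a canvas observer only repaints in response to a real change, rather than on every pointer tick.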
Yes, it seems so; as I said, we end up updating the whole screen (try
tracing the computed refresh rect in
CompositionView::viewportPaintEvent). As before, the problem lies less
in when to trigger the update than in what to update. I tried adding an
'if (recording) segmentDrawBufferNeedsRefresh()' in
CompositionView::setPointer(), and that puts us right back into
CPU-hogging mode.
> For MIDI segments, we could still potentially update the end marker while
> recording, comment it as an internal trick, and remove the end marker when
> recording finishes. Or we could continually update the rests in the segment
> as we record (which would also be handy for some future
> update-a-notation-editor-as-we-record mechanism, such as I implemented about
> two years ago but never completed satisfactorily), thus updating the actual
> end time rather than the end marker. Either way is beginning to seem far
> more satisfactory than wildly refreshing everything available whenever the
> position pointer moves (during recording, anyway).
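The first option -- the "internal trick" of keeping a temporary end marker during recording and removing it afterwards -- could look roughly like this. The names and types here are hypothetical, chosen only to illustrate the idea that the marker reverts to meaning a user override once recording stops.

```cpp
#include <cassert>
#include <optional>

// Hypothetical sketch, not Rosegarden's API: a MIDI segment carries a
// temporary end marker only while recording; dropping it on stop restores
// the default behaviour (end marker inherited from the real end time).
class MidiSegmentStub {
public:
    void recordTo(long tick) {
        m_recording = true;
        m_tempEndMarker = tick;    // provokes a display update downstream
    }
    void stopRecording() {
        m_recording = false;
        m_tempEndMarker.reset();   // remove the internal marker when done
    }
    bool isRecording() const { return m_recording; }
    bool hasEndMarkerOverride() const { return m_tempEndMarker.has_value(); }
    long effectiveEnd(long defaultEnd) const {
        return m_tempEndMarker.value_or(defaultEnd);
    }

private:
    bool m_recording = false;
    std::optional<long> m_tempEndMarker;
};
```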
I'd favor a solution where the recording segments are updated one way or
another, because they can then signal the model, which can compute
which rect needs to be updated -- even if that computation is broken at
the moment, because we rely on the widget's internal refresh rectangle
computation when we should maintain our own, specifically for the
segment draw buffer.

However, it seems pretty daft to me to update N segments with
essentially the same information, each of which will trigger a screen
update and so on, when the info we actually need is simply the pointer
position.
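Maintaining our own refresh rectangle for the segment draw buffer could be as simple as accumulating the union of the rects of the segments that actually changed, and handing that single rect to the repaint, instead of invalidating the whole viewport on each pointer move. The Rect type and class names below are illustrative placeholders, not the CompositionView code.

```cpp
#include <algorithm>
#include <cassert>

// Minimal axis-aligned rectangle; a placeholder for QRect.
struct Rect {
    int x = 0, y = 0, w = 0, h = 0;
    bool empty() const { return w <= 0 || h <= 0; }
};

// Bounding union of two rects (empty rects are identity elements).
Rect unite(const Rect &a, const Rect &b) {
    if (a.empty()) return b;
    if (b.empty()) return a;
    int x1 = std::min(a.x, b.x), y1 = std::min(a.y, b.y);
    int x2 = std::max(a.x + a.w, b.x + b.w);
    int y2 = std::max(a.y + a.h, b.y + b.h);
    return {x1, y1, x2 - x1, y2 - y1};
}

// The model accumulates a dirty rect as changed segments report in;
// the paint code takes it (and resets it) once per repaint.
class DrawBufferRefresh {
public:
    void segmentChanged(const Rect &r) { m_dirty = unite(m_dirty, r); }
    Rect take() { Rect r = m_dirty; m_dirty = Rect{}; return r; }
private:
    Rect m_dirty;
};
```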
> [...] commit a fix in a moment -- that's immaterial to this though). But in this
> case the requests have different notify callbacks because they come from
> separately constructed AudioPreviewUpdater objects, so the thread has no way
> to know that one is intended to supersede the other. The composition model
> is the only thing that actually knows, I think.
I agree then, it's probably the model which should sort the audio
preview requests.
_______________________________________________
Rosegarden-devel mailing list
Rosegarden-devel@lists.sourceforge.net - use the link below to unsubscribe
https://lists.sourceforge.net/lists/listinfo/rosegarden-devel