> On 15 Mar 2015, at 7:22 pm, Patrick J. Collins 
> <patr...@collinatorstudios.com> wrote:
> 
> This NSView is of an audio waveform.  I currently have my drawRect:
> method draw the lines of my waveform,

Have you designed it so that it only draws the minimum it needs to? For 
example, if your audio waveform represents 5 minutes of audio, but your view is 
only showing a 10-second chunk, you need a way to examine only that 10 seconds 
and draw that. Once you have that, you can refine it further - when a part of 
the view needs repainting, that maps to a certain time range (or ranges) of the 
audio, so you need only draw that small range. Minimising drawing is essential 
for performance.
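
Something like this is all the mapping needs to be (a rough sketch: 
totalDuration is a hypothetical property holding the audio's length, and it 
assumes the whole waveform spans the view's width - a scrolling view would add 
an offset):

- (NSTimeInterval)timeForX:(CGFloat)x
{
    // Proportional mapping: view x position -> audio time.
    return (x / NSWidth([self bounds])) * self.totalDuration;
}

- (CGFloat)xForTime:(NSTimeInterval)t
{
    // Inverse mapping: audio time -> view x position.
    return (t / self.totalDuration) * NSWidth([self bounds]);
}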

> but as soon as I added a timer
> loop to move my playhead over the waveform as it plays, I found that
> this was not the right approach, as every time the playhead moved, the
> parent (waveform view) would have to redraw itself, and this resulted in
> the playhead moving in an extremely choppy fashion.

Exactly. The "play head" position represents a single sample (or perhaps a very 
small range), so your view must have a way to limit drawing to that range 
based on the dirty rects of the view. When the playhead moves it dirties a very 
small area of the view, so if you redraw the entire view it will always be slow 
and choppy.
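
So the playhead setter should dirty only the strips it affects - a sketch, 
assuming an _playheadTime ivar and the x/time mapping methods sketched earlier:

- (void)setPlayheadTime:(NSTimeInterval)time
{
    CGFloat oldX = [self xForTime:_playheadTime];
    CGFloat newX = [self xForTime:time];
    _playheadTime = time;

    // Dirty two thin vertical strips only: where the playhead was (so
    // the waveform underneath gets restored) and where it now is.
    CGFloat pad = 1.0; // half the playhead line width
    [self setNeedsDisplayInRect:
        NSMakeRect(oldX - pad, 0.0, pad * 2.0, NSHeight([self bounds]))];
    [self setNeedsDisplayInRect:
        NSMakeRect(newX - pad, 0.0, pad * 2.0, NSHeight([self bounds]))];
}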

Also, the use of a timer here seems weird. Wouldn't an audio player have a 
natural concept of the "position" of where the playback was in a buffer of 
samples? That translates to a time value, which in turn maps to a position 
within the view. As the audio plays, the position is updated at intervals - no 
timer (in the sense of an NSTimer) is involved. Of course, a typical audio 
player is updating that sample pointer 44,100 times per second, which is far, 
far faster than your view could ever keep up with, no matter how optimal. 60fps 
is your goal, so you *could* run a timer at 60Hz and use that to poll where the 
audio player has got to and update the playhead. Or use the sample rate of the 
player and divide it down to about 60fps.
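
That polling could be as little as this (a sketch - "player" stands in for 
whatever audio API you're using, assuming it can report its current playback 
time, as AVAudioPlayer's -currentTime does):

- (void)startPlayheadTimer
{
    // ~60Hz; each tick just asks the player where it is.
    self.playheadTimer = [NSTimer scheduledTimerWithTimeInterval:1.0 / 60.0
                                                          target:self
                                                        selector:@selector(pollPlayer:)
                                                        userInfo:nil
                                                         repeats:YES];
}

- (void)pollPlayer:(NSTimer *)timer
{
    [self.waveformView setPlayheadTime:[self.player currentTime]];
}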

> 
> My next approach was to save my drawn waveform to an NSImage and use
> that as a background for my view...  If you have a better suggestion for
> how I could handle this, I'd love to hear it.

This seems a good idea - it will give better performance than directly 
redrawing the waveform, because it's just bit blitting. But you'd still want to 
limit that to strictly the areas requiring updating, not just blast the whole 
image into the view every time. While the clip region will exclude the parts 
that aren't needed at a low level, drawing the whole image still scans over all 
of its pixels. Using 
the dirty rects to pick out small parts of that image will improve performance 
dramatically.

The key to this is using -[NSView getRectsBeingDrawn:count:], but somewhere you 
need a way to go from view position <-> audio time, and that ends up getting 
used everywhere - not just for redrawing, but for positioning the playhead and 
the selection frame.
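
Putting those together, the drawRect: ends up roughly like this (assuming the 
cached image is the same size as the view, so the source and destination rects 
coincide):

- (void)drawRect:(NSRect)dirtyRect
{
    const NSRect *rects;
    NSInteger count;
    [self getRectsBeingDrawn:&rects count:&count];

    for (NSInteger i = 0; i < count; i++) {
        // Blit just this dirty piece of the cached waveform image.
        [self.waveformImage drawInRect:rects[i]
                              fromRect:rects[i]
                             operation:NSCompositeSourceOver
                              fraction:1.0];
    }

    // ...then draw the playhead and selection on top, but only where
    // they intersect the dirty area.
}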

Or use a CALayer to store the image - layer drawing is (presumably) already 
highly optimised to only draw the minimum needed and the layer content is 
cached on the GPU as an OpenGL texture so it's really fast.
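
For example (a sketch, with waveformImage being the cached NSImage):

// In the view's setup; needs <QuartzCore/QuartzCore.h>.
[self setWantsLayer:YES];

CALayer *waveformLayer = [CALayer layer];
waveformLayer.frame = NSRectToCGRect([self bounds]);
waveformLayer.contents = self.waveformImage; // NSImage accepted directly on 10.6+
[[self layer] addSublayer:waveformLayer];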

> And to answer your question, the selection would be selecting frames of
> audio, so that when it is played, the playhead starts there and ends at
> the end of the rectangle.
> 

Sounds like a better approach would be a draggable "frame" that selects a range 
- a bit like iMovie perhaps. That's more complex than drawing a simple 
rectangle, of course, but you could start with a simple rect and improve it as 
you went. How is the actual highlighting achieved, though? As the rect is dragged 
it will have to repaint parts of the waveform behind it, again calling on the 
mapping between view position and audio time.
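
A sketch of that drag handling, assuming _dragOrigin was captured in mouseDown: 
and _selectionRect is an ivar:

- (void)mouseDragged:(NSEvent *)event
{
    NSPoint p = [self convertPoint:[event locationInWindow] fromView:nil];
    NSRect oldSelection = _selectionRect;

    // Full-height selection spanning from the drag origin to the cursor.
    _selectionRect = NSMakeRect(MIN(_dragOrigin.x, p.x), 0.0,
                                fabs(p.x - _dragOrigin.x), NSHeight([self bounds]));

    // Repaint only what changed - the union of the old and new rects is
    // a cheap (slightly generous) bound on that.
    [self setNeedsDisplayInRect:NSUnionRect(oldSelection, _selectionRect)];
}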

If I were designing this, I'd probably have a single NSView subclass that 
handled three things: a) optimally drawing the waveform, b) maintaining and 
drawing the playhead and c) maintaining and drawing a selection rectangle. I 
wouldn't use additional views for the playhead and selection graphics because 
that's just asking for low performance (CALayers, however, would be a different 
matter, and probably a good solution, but these can be created as part of a 
single view). Keep it all in one view where it's all nice and simple and you 
can use the fastest drawing techniques you can, such as low-level Core 
Graphics, or CALayers. You can then ask your view questions, such as "where is 
the playhead?", "what is my selection time range?" and so on. Presumably the 
answers to these are used to set properties on the audio player itself, which is 
independent of the view.

--Graham


