More details on how I compute positions in audio files:

First I get the file's processing format. Then, for each start/end time of
the audio sections in my file, I compute a frame position like so:

position = AVAudioFramePosition(time * format.sampleRate)

where time is a value in seconds.
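
A minimal sketch of that step (url, startTime and endTime are assumed
given):

    import AVFoundation

    let file = try AVAudioFile(forReading: url)
    let format = file.processingFormat
    let sectionStartPosition = AVAudioFramePosition(startTime * format.sampleRate)
    let sectionEndPosition = AVAudioFramePosition(endTime * format.sampleRate)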

Now I break each audio section into 3 parts:

[ buffer ] - [ segment ] - [ buffer ]

The start and end buffers are 1024 frames long, and I allocate them with
that capacity.
Then I compute the start frame position of the segment: segmentStart =
sectionStartPosition + 1024
I do the same for the end frame position of the segment: segmentEnd =
sectionEndPosition - 1024
From those I deduce the length in frames of the segment:
AVAudioFrameCount(segmentEnd - segmentStart)
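
Put together, a sketch of that split, reusing the positions computed above:

    let bufferLength: AVAudioFramePosition = 1024
    let segmentStart = sectionStartPosition + bufferLength
    let segmentEnd = sectionEndPosition - bufferLength
    let segmentFrameCount = AVAudioFrameCount(segmentEnd - segmentStart)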

Is that right?

2017-03-31 9:04 GMT+02:00 Vincent CARLIER <[email protected]>:

> Hello all,
>
> I'm writing an application where I need to play parts of audio files. Each
> audio file contains the audio data for a separate track.
> These parts are sections with a begin time and an end time, and I'm trying
> to play those parts in the order I choose.
>
> So for example, imagine I have 4 sections:
>
> A - B - C - D
>
> and I activate B and D, I want to play B, then D, then B again, then D,
> etc.
>
> To make smooth "jumps" in playback, I think it's important to fade in/out
> the start/end section buffers.
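>
> For instance, a minimal fade sketch, assuming a non-interleaved Float32
> buffer (applyFade is a name of my own choosing):
>
>     func applyFade(to buffer: AVAudioPCMBuffer, fadeIn: Bool) {
>         guard let channels = buffer.floatChannelData else { return }
>         let frames = Int(buffer.frameLength)
>         guard frames > 0 else { return }
>         for ch in 0..<Int(buffer.format.channelCount) {
>             let samples = channels[ch]
>             for i in 0..<frames {
>                 let t = Float(i) / Float(frames)    // ramp 0 -> 1
>                 samples[i] *= fadeIn ? t : (1 - t)  // fade in, else fade out
>             }
>         }
>     }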
>
> So, I have a basic AVAudioEngine setup, with an AVAudioPlayerNode and a
> mixer.
> For each audio section, I cache some information (sketched after this
> list):
>
> - a buffer for the first samples in the section (which I fade in manually)
> - a tuple with the AVAudioFramePosition and AVAudioFrameCount of the
> middle segment
> - a buffer for the end samples in the audio section (which I fade out
> manually)
>
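> A sketch of that cached data, with names of my own choosing:
>
>     struct AudioSection {
>         let startBuffer: AVAudioPCMBuffer   // first 1024 frames, faded in
>         let segment: (start: AVAudioFramePosition,
>                       count: AVAudioFrameCount)  // middle part of the file
>         let endBuffer: AVAudioPCMBuffer     // last 1024 frames, faded out
>     }
>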
> Now, when I schedule a section for playing, I tell the AVAudioPlayerNode to:
>
> - schedule the start buffer (scheduleBuffer(_:completionHandler:), no
> options)
> - schedule the middle segment
> (scheduleSegment(_:startingFrame:frameCount:at:completionHandler:))
> - finally, schedule the end buffer (scheduleBuffer(_:completionHandler:),
> no options)
>
> all with a nil "at" time.
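>
> In code, the sequence looks roughly like this (player, section and file
> are assumed from the setup above):
>
>     player.scheduleBuffer(section.startBuffer, completionHandler: nil)
>     player.scheduleSegment(file,
>                            startingFrame: section.segment.start,
>                            frameCount: section.segment.count,
>                            at: nil,
>                            completionHandler: nil)
>     player.scheduleBuffer(section.endBuffer, completionHandler: nil)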
>
> The problem is that I hear clicks and crackly sounds at the audio section
> boundaries, and I can't see what I'm doing wrong.
> My first suspect was the fades I apply manually (basically multiplying
> sample values by a volume factor), but I get the same result without them.
> I thought I wasn't scheduling in time, but scheduling sections well in
> advance (A - B - C, for example) gives the same result.
> I then tried different frame position computations based on the audio
> format settings, again with the same result.
>
> So I'm out of ideas here, and perhaps I didn't get the scheduling
> mechanism right.
>
> Can anyone confirm that I can mix scheduling buffers and segments on an
> AVAudioPlayerNode? Or should I schedule only buffers, or only segments?
> I can confirm that scheduling only segments works; playback is perfectly
> fine.
>
>