Hi there!  I've got a video chat app that switches between a
kAudioUnitSubType_VoiceProcessingIO I/O unit and a kAudioUnitSubType_RemoteIO
I/O unit depending on context (whether or not the user is watching a video or
listening to music while video chatting).  I would love to update to the new
AVAudioEngine API, or even the new AUAudioUnit API, but I'm hitting a few
roadblocks.

First off, AVAudioEngine's input and output nodes are generated implicitly
and are not meant to be mucked with.  Changing the AVAudioSession category
and mode to PlayAndRecord and VideoChat does nothing to alter the generated
nodes - they still report as kAudioUnitSubType_RemoteIO.  If AUGraph is
deprecated next year, I don't see how I could switch over to AVAudioEngine
unless this is addressed - what is a video chat app to do?  Should I just
file a feature request, continue to use AUGraph, and wait?
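
For reference, here's roughly how I'm checking this (just a sketch - I'm
assuming the audioUnit accessor on AVAudioIONode plus
AudioComponentGetDescription are the right way to inspect the underlying
unit, and that the unit exists once the engine has been prepared):

    NSError *error = nil;
    AVAudioSession *session = [AVAudioSession sharedInstance];
    [session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
    [session setMode:AVAudioSessionModeVideoChat error:&error];
    [session setActive:YES error:&error];

    AVAudioEngine *engine = [[AVAudioEngine alloc] init];
    // Touching inputNode makes the engine create its io node; the wrapped
    // audioUnit may be NULL until the engine is prepared or started.
    AVAudioInputNode *inputNode = engine.inputNode;
    [engine prepare];

    AudioComponentDescription desc = {0};
    if (inputNode.audioUnit != NULL) {
        AudioComponentGetDescription(AudioComponentInstanceGetComponent(inputNode.audioUnit),
                                     &desc);
    }
    // This always logs 0 for me - the io node is never VoiceProcessingIO.
    NSLog(@"io unit is VoiceProcessingIO: %d",
          (int)(desc.componentSubType == kAudioUnitSubType_VoiceProcessingIO));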

Secondly, after watching a WWDC session from 2016, I thought maybe I could
put together a graph-less chain of AUAudioUnits using AURenderBlocks.
Setting up an I/O unit is rather straightforward, but connecting it to, say,
a multichannel mixer audio unit is not going as expected.  Here's an example
of what I'm trying to do:

#import <AVFoundation/AVFoundation.h>
#import <AudioToolbox/AudioToolbox.h>

- (void)setup {
    NSError *error = nil;

    // io unit

    AudioComponentDescription voipIODesc = (AudioComponentDescription) {
        .componentType = kAudioUnitType_Output,
        .componentSubType = kAudioUnitSubType_VoiceProcessingIO,
        .componentManufacturer = kAudioUnitManufacturer_Apple,
    };

    AUAudioUnit *voipIOUnit =
        [[AUAudioUnit alloc] initWithComponentDescription:voipIODesc error:&error];
    if (!voipIOUnit) {
        NSLog(@"error creating voip unit: %@", error);
    }
    voipIOUnit.inputEnabled = YES;
    voipIOUnit.outputEnabled = YES;

    AVAudioFormat *renderFormat =
        [[AVAudioFormat alloc] initStandardFormatWithSampleRate:44100.0 channels:2];
    if (![voipIOUnit.inputBusses[0] setFormat:renderFormat error:&error]) {
        NSLog(@"error setting render format on io unit input bus 0: %@", error);
    }


    // mixer unit

    AudioComponentDescription mixerDesc = (AudioComponentDescription) {
        .componentType = kAudioUnitType_Mixer,
        .componentSubType = kAudioUnitSubType_MultiChannelMixer,
        .componentManufacturer = kAudioUnitManufacturer_Apple,
    };

    AUAudioUnit *mixerAUAudioUnit =
        [[AUAudioUnit alloc] initWithComponentDescription:mixerDesc error:&error];
    if (!mixerAUAudioUnit) {
        NSLog(@"error creating mixer unit: %@", error);
    }

    for (AUAudioUnitBus *bus in mixerAUAudioUnit.outputBusses) {
        if (![bus setFormat:renderFormat error:&error]) {
            NSLog(@"error setting mixer output bus format: %@", error);
        }
    }

    if (![mixerAUAudioUnit.inputBusses setBusCount:2 error:&error]) {
        NSLog(@"error setting mixer input bus count: %@", error);
    }
    for (AUAudioUnitBus *bus in mixerAUAudioUnit.inputBusses) {
        if (![bus setFormat:renderFormat error:&error]) {
            NSLog(@"error setting mixer input bus format: %@", error);
        }
    }


    // render blocks

    AURenderPullInputBlock mixerRenderPullInputBlock =
        ^AUAudioUnitStatus(AudioUnitRenderActionFlags *actionFlags,
                           const AudioTimeStamp *timestamp,
                           AUAudioFrameCount frameCount,
                           NSInteger inputBusNumber,
                           AudioBufferList *inputData) {
        NSLog(@"rendering input block");
        return noErr;
    };
    AURenderBlock mixerRenderBlock = mixerAUAudioUnit.renderBlock;


    // set output provider for io unit

    voipIOUnit.outputProvider =
        ^AUAudioUnitStatus(AudioUnitRenderActionFlags * _Nonnull actionFlags,
                           const AudioTimeStamp * _Nonnull timestamp,
                           AUAudioFrameCount frameCount,
                           NSInteger inputBusNumber,
                           AudioBufferList * _Nonnull inputData) {
        // Pull bus 0 of the mixer; I'm expecting the mixer to then call
        // mixerRenderPullInputBlock for its input, but it never does.
        OSStatus status = mixerRenderBlock(actionFlags, timestamp, frameCount, 0,
                                           inputData, mixerRenderPullInputBlock);
        if (status) {
            // some error rendering
        }

        NSLog(@"output provider");

        return noErr;
    };

    if (![voipIOUnit allocateRenderResourcesAndReturnError:&error]) {
        NSLog(@"error allocating render resources: %@", error);
    }

    if (![mixerAUAudioUnit allocateRenderResourcesAndReturnError:&error]) {
        NSLog(@"error allocating render resources for mixer: %@", error);
    }

    if (![voipIOUnit startHardwareAndReturnError:&error]) {
        NSLog(@"error starting hardware: %@", error);
    }
}


So, when I run this example the output provider is definitely running and
mixerRenderBlock is returning noErr, but mixerRenderPullInputBlock is never
called.  Interestingly, if I set a render notify block on the mixer unit, it
is called both pre- and post-render.  Am I missing something?  In my AUGraph
setup I set input callbacks on the mixer's various input busses, but I can't
figure out an analogous way to do that without AUGraph.
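
For context, here's roughly what the AUGraph version looks like - an input
callback attached to a mixer input bus (sketch only; graph, mixerNode, and
RenderMixerInput stand in for the real graph, node, and callback in my
current setup):

static OSStatus RenderMixerInput(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
{
    // fill ioData with the audio for this mixer input bus
    return noErr;
}

// ... later, during graph setup:
AURenderCallbackStruct inputCallback = {
    .inputProc = RenderMixerInput,
    .inputProcRefCon = NULL,
};
AUGraphSetNodeInputCallback(graph, mixerNode, 0, &inputCallback);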

Any pointers or advice would be greatly appreciated!

Thanks,
Adam Bellard