I am not too familiar with the sipXmedia architecture yet, but a lot of this
sounds like what may be implemented by "Virtual Audio Cable" (
http://software.muzychenko.net/eng/vac.html). Are we planning a superset of
this functionality or something totally different here?

On 3/21/07, Alexander Chemeris <[EMAIL PROTECTED]> wrote:

Hello,

As you may see, we're working on a new audio input/output framework for
sipXmediaLib. The audio input part of the new framework is in the middle
of its design and is already partially implemented. Now it's time for the
audio output part. Here I'll present the first draft of its design.

So, the key idea and the most exciting goal of the new framework is
support for multiple input and output devices, multiple connections to
them, and switching of audio devices at runtime. Each device, whether
input or output, is represented by a separate object with a unified
interface. Device objects are not accessed directly, but are encapsulated
in so-called connections to enable concurrent access from several
flowgraphs. Connections, in turn, are encapsulated in device managers,
which provide multi-thread synchronization and hide device management
complexity.
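To make the layering more concrete, here is a rough sketch of how driver,
connection and manager objects might fit together. The class names and
method signatures below are illustrative assumptions only, not the actual
interfaces checked into the branch:

    // Illustrative sketch only -- all names and signatures are assumptions.
    #include <map>
    #include <memory>
    #include <cstdint>

    // One physical (or virtual) output device, e.g. a sound card.
    class OutputDeviceDriver {
    public:
        virtual ~OutputDeviceDriver() {}
        virtual bool enable(unsigned samplesPerFrame, unsigned samplesPerSec) = 0;
        virtual bool disable() = 0;
        virtual bool writeFrame(const int16_t* samples, unsigned numSamples) = 0;
    };

    // Wraps a driver so that several flowgraphs can share it.
    class OutputDeviceConnection {
    public:
        explicit OutputDeviceConnection(OutputDeviceDriver* driver)
            : mDriver(driver), mUseCount(0) {}
        void addUser()    { ++mUseCount; }
        void removeUser() { --mUseCount; }
    private:
        OutputDeviceDriver* mDriver;
        int mUseCount;
    };

    // Owns all connections, hands out opaque device IDs, and serializes
    // device add/remove/switch operations behind its own lock (omitted).
    class OutputDeviceManager {
    public:
        int addDevice(OutputDeviceDriver* driver) {
            int id = mNextId++;
            mConnections[id].reset(new OutputDeviceConnection(driver));
            return id;
        }
        void removeDevice(int id) { mConnections.erase(id); }
    private:
        std::map<int, std::unique_ptr<OutputDeviceConnection>> mConnections;
        int mNextId = 1;
    };

A flowgraph would then refer to a device only through its ID and the
manager, which is what makes runtime device switching possible.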

The audio output framework is similar to the audio input framework, but
has some significant differences. Each audio output driver can work in
two modes: direct write mode and non-direct write (mixer) mode. In direct
write mode data is pushed to the device as soon as it becomes available.
In mixer mode data is buffered and then pulled by the device itself.
Direct write mode has lower latency, but can be fed by only one source;
if two sources try to push data, only one will succeed. In contrast to
direct write mode, mixer mode is supposed to accept several streams and
mix them. In this mode the device should pull data when needed, because
the device manager has no clock and does not know when to push the next
frame.
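A minimal sketch of how a connection might dispatch between the two
modes, assuming a hypothetical pushFrame() entry point (the real
interfaces in the branch may well look different):

    #include <cstdint>

    // Minimal stand-in for the output device driver.
    struct OutputDeviceDriver {
        virtual ~OutputDeviceDriver() {}
        virtual bool writeFrame(const int16_t* samples, unsigned numSamples) = 0;
    };

    enum class WriteMode { DirectWrite, Mixer };

    class OutputConnection {
    public:
        OutputConnection(OutputDeviceDriver* driver, WriteMode mode)
            : mDriver(driver), mMode(mode), mDirectWriterBusy(false) {}

        // Called by a flowgraph when a frame of samples is ready.
        bool pushFrame(const int16_t* samples, unsigned numSamples) {
            if (mMode == WriteMode::DirectWrite) {
                // The frame goes straight to the device; if another source
                // is already writing, the second push simply fails.
                if (mDirectWriterBusy) {
                    return false;
                }
                mDirectWriterBusy = true;
                bool ok = mDriver->writeFrame(samples, numSamples);
                mDirectWriterBusy = false;
                return ok;
            }
            // Mixer mode: accumulate into a buffer; the device pulls it
            // later on its own clock (see the buffer sketch below).
            mixIntoBuffer(samples, numSamples);
            return true;
        }

    private:
        void mixIntoBuffer(const int16_t*, unsigned) { /* buffering omitted */ }

        OutputDeviceDriver* mDriver;
        WriteMode mMode;
        bool mDirectWriterBusy;
    };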

In mixer mode the device has a simple circular buffer of samples
associated with it. This buffer is driven by the respective
AudioOutputConnection, which is responsible for mixing incoming data and
handling all exceptional situations. In this mode the device driver
should provide the timing for pulling data, so callback or thread mode is
appropriate. In direct write mode, data passed to the
AudioOutputConnection is simply pushed to the device, so the device
driver may work in any mode, although asynchronous write mode seems most
natural in this case.
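For illustration, one possible shape of that circular buffer is sketched
below: pushed frames are summed (with clipping) into the buffer, and the
driver's timing callback pulls fixed-size frames back out. The class and
method names are assumptions, not the actual implementation:

    #include <vector>
    #include <cstdint>

    class MixerBuffer {
    public:
        MixerBuffer(unsigned frames, unsigned samplesPerFrame)
            : mSamples(frames * samplesPerFrame, 0),
              mSamplesPerFrame(samplesPerFrame),
              mReadPos(0) {}

        // Mix (sum with clipping) an incoming frame at the given frame
        // offset relative to the current read position.
        void mixFrame(unsigned frameOffset, const int16_t* in) {
            unsigned start =
                (mReadPos + frameOffset * mSamplesPerFrame) % mSamples.size();
            for (unsigned i = 0; i < mSamplesPerFrame; ++i) {
                unsigned pos = (start + i) % mSamples.size();
                int32_t sum = int32_t(mSamples[pos]) + int32_t(in[i]);
                if (sum > 32767)  sum = 32767;    // clip instead of wrapping
                if (sum < -32768) sum = -32768;
                mSamples[pos] = int16_t(sum);
            }
        }

        // Called from the device's timing callback: copy out the next frame
        // and zero it so the slot can be reused.
        void pullFrame(int16_t* out) {
            for (unsigned i = 0; i < mSamplesPerFrame; ++i) {
                unsigned pos = (mReadPos + i) % mSamples.size();
                out[i] = mSamples[pos];
                mSamples[pos] = 0;
            }
            mReadPos = (mReadPos + mSamplesPerFrame) % mSamples.size();
        }

    private:
        std::vector<int16_t> mSamples;
        unsigned mSamplesPerFrame;
        unsigned mReadPos;
    };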

Further, I have some doubts about the synchronization scheme we've
selected. Currently all synchronization is done in MpOutputDeviceManager,
and AudioOutputConnection provides no synchronization at all. This leads
to exposing the pullFrame() interface, which is intended for the device
driver only, to any caller. I would prefer to hide such methods and not
expose them to the end user. The natural place for pullFrame() is
AudioOutputConnection, but it cannot be put there while
AudioOutputConnection is not thread safe. There is one more problem with
exposing pullFrame() at the MpOutputDeviceManager level: it will block
waiting for other MpOutputDeviceManager methods to finish, while this
call should be real-time.
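One possible way out, sketched below under the assumption that
AudioOutputConnection gets its own short-held lock: pullFrame() and
pushFrame() would then contend only with each other on that connection,
never with unrelated manager-wide operations, which keeps the pull path
close to real-time. This is only an idea, not the checked-in code, and
std::mutex merely stands in for whatever OS abstraction the project uses:

    #include <mutex>
    #include <cstdint>

    class AudioOutputConnection {
    public:
        // Called only from the device driver's callback thread. The lock is
        // held just long enough to copy one frame.
        bool pullFrame(int16_t* out, unsigned numSamples) {
            std::lock_guard<std::mutex> guard(mBufferLock);
            // copy the next frame out of the circular buffer (omitted)
            (void)out; (void)numSamples;
            return true;
        }

        // Called by flowgraphs; competes only with other pushes and pulls
        // on this connection, never with device add/remove in the manager.
        bool pushFrame(const int16_t* in, unsigned numSamples) {
            std::lock_guard<std::mutex> guard(mBufferLock);
            // mix into the circular buffer (omitted)
            (void)in; (void)numSamples;
            return true;
        }

    private:
        std::mutex mBufferLock;  // per-connection lock, not the manager's
    };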

PS The draft1 versions of the MpOutputDeviceDriver and
MpOutputDeviceManager interfaces were checked in at revision 9121 to the
sipXtapi svn branch.

--
Regards,
Alexander Chemeris.

SIPez LLC.
SIP VoIP, IM and Presence Consulting
http://www.SIPez.com
tel: +1 (617) 273-4000

_______________________________________________
sipxtapi-dev mailing list
[email protected]
List Archive: http://list.sipfoundry.org/archive/sipxtapi-dev/

