Mauro, I would follow Vadim on this one.

There are two cases that should be checked:
  - 1/ Really be careful about Linux and Mac OS X sound drivers. I agree
that Windows drivers always give you the "standard" framesizes you ask for,
but this is not the case on Linux or Mac OS X.
  - 2/ Pay attention to the far-end media mixing during a 3-way conference.
Payloads may arrive at different end points with different framesizes, and
mixing must occur at a shared framesize.
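To illustrate point 2: once each far-end leg has been adapted to the shared framesize, the mix itself is just a per-sample saturated sum. A minimal sketch (hypothetical helper, not actual phapi code):

```c
#include <stdint.h>
#include <stddef.h>

/* Mix two 16-bit PCM frames of the same (shared) framesize,
 * saturating instead of wrapping on overflow. */
static void mix_frames(const int16_t *a, const int16_t *b,
                       int16_t *out, size_t nsamples)
{
    for (size_t i = 0; i < nsamples; i++) {
        int32_t s = (int32_t)a[i] + (int32_t)b[i];
        if (s > INT16_MAX) s = INT16_MAX;
        if (s < INT16_MIN) s = INT16_MIN;
        out[i] = (int16_t)s;
    }
}
```

With a third leg you would apply the same sum once more, which is why all legs must first be brought to one framesize.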

The current phapi approach is, I think, optimized for many cases (fewer
intermediary buffers = less memory usage), and I think a dynamic graph
construction 'a la gstreamer' would be a plus:
 * no buffers when they are not needed (and on embedded targets they are
more often than not unneeded)
 * buffers for the scenarios that need them

So a step-by-step approach could be to add an entity inside phapi with a
meta-description of the graph, to know which graph elements are activated,
which are not, and with what parameters. I don't know about the
mediastreamer (ortp side project) integration that Vadim was talking about
recently, so I can't really give you more help on this one.
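Such a meta-description could be as simple as a table the engine walks at call setup to decide which elements (and which intermediary buffers) to instantiate. A rough sketch with entirely hypothetical names (phapi has no such structure today):

```c
#include <stddef.h>

/* Hypothetical meta-description of one audio-graph element. */
typedef struct ph_graph_element {
    const char *name;          /* e.g. "resampler", "fifo", "mixer" */
    int         active;        /* 1 if the element is needed for this call */
    int         in_framesize;  /* samples per input frame */
    int         out_framesize; /* samples per output frame */
} ph_graph_element;

/* Activate a buffering element only when its framesizes differ,
 * so no intermediary buffer exists when it is not needed. */
static void ph_graph_configure(ph_graph_element *g, size_t n)
{
    for (size_t i = 0; i < n; i++)
        g[i].active = (g[i].in_framesize != g[i].out_framesize);
}
```

The point of the table is exactly the embedded-target win above: elements whose input and output framesizes match are simply never instantiated.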

Hope this helps,
Jerome





2009/7/28 Vadim Lebedev <[email protected]>

>  IMO this is the correct way...
>
>
> Thanks
> Vadim
> Mauro Sergio Ferreira Brasil wrote:
>
> Hi Jerome!
>
> I asked how common that condition is because I have had no problem here
> initializing the audio devices of the same soundcard with different
> framesizes.
> I have made lots of test calls using my onboard soundcard on Windows with
> a variety of framesize scenarios: 40 ms on TX and 20 ms on RX; 30 ms on
> both paths (iLBC's "mode" parameter config); etc.
>
> Now, considering that I can't rely on anything, I suppose the best choice
> is to revert the audio device initialization to a fixed (i.e. hardcoded)
> framesize corresponding to 20 ms packetization on both paths.
> This will avoid lots of changes and the tests they would inevitably demand.
>
> We would initialize the incoming and outgoing audio paths to always work
> with a framesize of 160 shorts (or 320 when working at 16 kHz), which is
> the way it used to work before the patch I sent, and create buffer
> adaptation routines for both paths in "phmedia-audio.c" to process the
> incoming and outgoing data accordingly.
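The adaptation routines Mauro describes could be sketched as a small FIFO that accepts chunks of any size and hands out fixed 160-short frames. All names here are hypothetical, not actual phapi code:

```c
#include <string.h>
#include <stdint.h>
#include <stddef.h>

#define FRAME_SHORTS 160  /* 20 ms at 8 kHz, as in the thread */

/* Accumulates audio arriving in arbitrary chunk sizes and hands
 * out fixed FRAME_SHORTS-sample frames. */
typedef struct {
    int16_t buf[4 * FRAME_SHORTS];
    size_t  fill;  /* shorts currently buffered */
} frame_adapter;

static void adapter_push(frame_adapter *fa, const int16_t *in, size_t n)
{
    if (fa->fill + n > sizeof fa->buf / sizeof fa->buf[0])
        return;  /* overflow: drop (real code would log or resize) */
    memcpy(fa->buf + fa->fill, in, n * sizeof *in);
    fa->fill += n;
}

/* Returns 1 and fills 'out' when a whole frame is available. */
static int adapter_pop(frame_adapter *fa, int16_t *out)
{
    if (fa->fill < FRAME_SHORTS)
        return 0;
    memcpy(out, fa->buf, FRAME_SHORTS * sizeof *out);
    fa->fill -= FRAME_SHORTS;
    memmove(fa->buf, fa->buf + FRAME_SHORTS, fa->fill * sizeof *out);
    return 1;
}
```

One adapter per direction would sit in the play and record paths, absorbing the difference between the network packetization and the fixed device framesize.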
>
> What do you think?
>
> Thanks and best regards,
> Mauro.
>
>
>
>
> jerome wagner escreveu:
>
> Hello, basically, from what I remember (and from what I understand of your
> question), you can't rely on anything.
>
>  MIC and SPK accepted framesizes would typically be the same if you work
> with the same soundcard, but as soon as you work with different devices (a
> MIC integrated into your webcam and the SPK on your standard soundcard) the
> accepted framesizes will differ.
>
>  The Mac OS X sound API has an integrated resampler that can convert
> between the "physical" framesizes (hardware driven) and the "logical"
> framesizes (those available via software invocation).
>
>  You have no choice other than to go down a "preferred" list and ask the
> devices whether they accept what you would like. You may end up with
> different framesizes on MIC and SPK.
>
>  Once the negotiation with MIC and SPK is done, you have to propagate the
> choice through the audio graph so that every component interface has a
> known framesize.
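The "preferred list" negotiation Jerome describes might be sketched as follows; the probe callback stands in for whatever driver-specific query a real implementation would use (this is not a phapi or PortAudio API):

```c
#include <stddef.h>

/* Hypothetical device probe: returns nonzero if the device
 * accepts the given framesize. */
typedef int (*probe_fn)(int framesize);

/* Walk a preferred list of framesizes and return the first one
 * the device accepts, or -1 if the device accepts none of them. */
static int negotiate_framesize(const int *preferred, size_t n, probe_fn probe)
{
    for (size_t i = 0; i < n; i++)
        if (probe(preferred[i]))
            return preferred[i];
    return -1;
}

/* Example probe: a device that only accepts 30 ms-aligned sizes
 * at 8 kHz (multiples of 240 samples). */
static int accepts_30ms_multiples(int framesize)
{
    return framesize % 240 == 0;
}
```

Running the negotiation separately for MIC and SPK is exactly how the two can end up with different framesizes, which then have to be reconciled in the graph.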
>
>  I hope this helps
> Jerome
>
>
>
>
>
>
> 2009/7/28 Mauro Sergio Ferreira Brasil <[email protected]>
>
>> Hi Jerome!
>>
>> Thanks for the graphs!
>> They are extremely enlightening indeed.
>>
>> Just to clear things up, do you know whether the abnormal condition we
>> need to deal with is:
>>
>> 1- Sometimes, input and output audio devices can't operate with different
>> framesizes; or
>> 2- Sometimes, input or output audio devices can't operate with a framesize
>> different from the one resulting from 20 ms packetization?
>>
>> I think the only condition we need to consider is the one in item 1, but I
>> need to be sure that item 2 isn't possible as well.
>>
>> Can you indicate which limitations we need to consider (just 1, just 2,
>> or both)? And can you tell me how common this condition is?
>>
>> Thanks and best regards,
>> Mauro.
>>
>>
>>
>>  jerome wagner escreveu:
>>
>> Hello. Just in case it helps, and before it gets lost, I dug those
>> diagrams out of my hard drive. They come from an analysis I did of the
>> phapi audio flows about a year ago; they clarified some of the code
>> entry points for me.
>> I can send the original .vsd if needed.
>> Jerome
>>
>>
>> 2009/7/27 Mauro Sergio Ferreira Brasil <[email protected]>
>>
>>> Hi Vadim!
>>>
>>> I was not aware of such a condition.
>>> I understand and agree that, given this, we need to make the necessary
>>> changes to handle this difference outside the scope of audio device
>>> initialization/manipulation.
>>>
>>> So, when you said that they are "unable to work with different framesize",
>>> you mean that the output and input audio devices can't operate at different
>>> framesizes (like input at 20 ms and output at 30 ms), right?
>>>
>>> Anyway, do you have any suggestion on where to place the buffer adaptation
>>> code for the outgoing and incoming audio paths?
>>> Running some tests, I observed audio traffic only through the methods
>>> "ph_audio_play_cbk" (used to play network incoming audio) and
>>> "ph_audio_rec_cbk" (used to send audio device audio to the network).
>>>
>>> Thanks and best regards,
>>> Mauro.
>>>
>>>
>>>
>>>
>>> Vadim Lebedev escreveu:
>>>
>>> Mauro,
>>>
>>> I've been thinking some more about this packetization issue, and I think
>>> there is no need to have two different framesizes, for the simple reason
>>> that we sometimes have audio devices that are unable to work with
>>> different framesizes.
>>> The framesize adaptation should be done externally, not in the
>>> xxx_stream_yyy routines, IMO.
>>>
>>>
>>> Thanks
>>> Vadim
>>>
>>>
>>>
>>>
>>> Mauro Sergio Ferreira Brasil wrote:
>>>
>>> Hi Vadim!
>>>
>>> The patch with the changes you requested, avoiding a dependency of
>>> "eXosip" on "phapi", is attached.
>>> It was built against the trunk version retrieved yesterday.
>>>
>>> The only point I think will demand some consideration is the set of
>>> changes in "phmedia-portaudio.c" that let the audio devices be configured
>>> with different numbers of frames per buffer for the incoming and outgoing
>>> paths.
>>>
>>> In order to keep the "XXX_stream_open" method signature, I chose to keep
>>> using the "framesize" input parameter, adding comments to the
>>> "open_audio_device" method (in "phmedia-audio.c") and inside
>>> "pa_stream_open" to indicate the dependency between the framesize passed
>>> to "audio_stream_open" and the one calculated inside "pa_stream_open".
>>> In short, I pass the incoming audio framesize through the existing
>>> parameter and calculate the outgoing framesize from: 1- the incoming
>>> framesize; 2- the incoming and outgoing packetization given by the
>>> "phastream_t" structure. Using the inverse logic, I applied the incoming
>>> framesize to the output device creation and vice-versa.
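The calculation described above reduces to scaling the incoming framesize by the ratio of the packetization times. A sketch with a hypothetical helper name (not the actual pa_stream_open code):

```c
/* Derive the outgoing framesize (in samples) from the incoming
 * framesize and the two packetization times in milliseconds,
 * e.g. 160 samples at 20 ms in / 30 ms out gives 240 samples. */
static int outgoing_framesize(int in_framesize, int in_ptime, int out_ptime)
{
    return in_framesize * out_ptime / in_ptime;
}
```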
>>>
>>> In fact, IMHO the desirable approach would be to change the
>>> "XXX_stream_open" signature to take two framesize parameters (one for the
>>> input path and one for the output path), which would lead to changes in
>>> all the "phmedia-XXX" implementations (portaudio, alsa, etc.).
>>> This approach is desirable because it would force the signature change on
>>> every implementation, along with the internal changes each one demands.
>>>
>>> Anyway, I chose the easy way, which introduced some dependency between
>>> "phmedia-audio.c" and "phmedia-portaudio.c" that I did not replicate to
>>> the implementations other than portaudio, which is the one we use here.
>>>
>>> Please let me know if you disagree with any of this.
>>>
>>> I'll be waiting for your reply.
>>>
>>> Thanks and best regards,
>>>
>>> --
>>> At.,
>>> Technology and Quality on Information
>>> Mauro Sérgio Ferreira Brasil
>>> Coordenador de Projetos e Analista de Sistemas
>>> [email protected] : www.tqi.com.br
>>> +55 (34) 3291-1700 : +55 (34) 9971-2572
>>>
>>>
>>>
>>>
>>> _______________________________________________
>>> QuteCom-dev mailing list
>>> [email protected]
>>> http://lists.qutecom.org/mailman/listinfo/qutecom-dev
>>>
>>>
>>
>>
>
>
>
>
>

<<2 image/jpeg attachments: the phapi audio-flow diagrams>>

