Hi Vadim!

But... considering what I exposed in my previous message, do you agree that this method will always return the value given by "codec->decoded_framesize", regardless of the channel (a concept added in the last patch)?

I mean... considering that we will make the entire "phapi" core work with 20ms packetization, the value returned by this method will always be the value used at the initialization of all codecs (the phcodec_t structure of every codec), right?
Because both channels will work with the same framesize, which will always be the one associated with 20ms packetization.
Within the scope of use of this method, of course (RTP packets will still flow through the network with the negotiated framesizes).

If your answer is "YES", no problem with that.
In fact, this will reduce the number of changes.

I'm just not certain whether its signature should be reverted to the prior version that doesn't consider channels (incoming and outgoing), now that the parameter will be useless.
Or should we keep the channel parameter to reduce changes and leave it for some possible future use?
What do you think?

Just to synchronize thoughts...

Thanks and best regards,
Mauro.



Vadim Lebedev wrote:
Mauro,

It's a good plan, except that it is a BAD idea to remove the "ph_astream_decoded_framesize_get" calls....
Please keep them....

Thanks
Vadim

Mauro Sergio Ferreira Brasil wrote:
Hi Guys!

Let me explain why I asked that question, so maybe we can reach a conclusion here.

The point is quite simple: I want to remove all those "reformatting" blocks in the "ph_media_retrieve_decoded_frame" method because I think they won't be necessary any more.
And they won't be necessary because of the solution I'll propose here for the "device with same framesize" issue.

Having given some thought to this problem, I realized that the simplest and easiest solution is to bring the whole "phapi" core back to the way it was before, locking all device initialization, buffer allocation, and encode/decode logic so that it always operates with the framesize corresponding to 20ms packetization.
Methods that use "ph_astream_decoded_framesize_get" today will use "codec->decoded_framesize" directly.

The changes to codec initialization will be removed too, because we will call the encode/decode routines only with buffers whose encoded and decoded sizes correspond to 20ms packetization.

The rest of the magic will happen with adaptation buffers placed at the incoming and outgoing edges between "phapi" and "ortp".
For example: on the incoming (RX) path, the adaptation buffer code will be placed in the "ph_media_retrieve_decoded_frame" method, so that it processes and returns only a buffer of the size corresponding to 20ms packetization, keeping the rest of the current RTP packet to be processed on the next call.

I have already validated that point and it seems it will work fine, but I still have to validate the outgoing (TX) path.
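To make the idea concrete, here is a minimal sketch of such an RX adaptation buffer. All names here (adapt_buf, adapt_push, adapt_pop, FRAME_SAMPLES) are hypothetical, not existing phapi symbols: decoded payloads of any size are pushed in, and exactly one 20ms frame (160 shorts at 8 kHz) is popped per call, with any remainder kept for the next call.

```c
#include <string.h>

#define FRAME_SAMPLES 160                  /* 20 ms at 8 kHz, mono */
#define ADAPT_CAP     (FRAME_SAMPLES * 8)  /* arbitrary headroom for this sketch */

typedef struct {
    short data[ADAPT_CAP];
    int   len;                             /* samples currently buffered */
} adapt_buf;

/* Append decoded samples; returns the number of samples actually stored. */
static int adapt_push(adapt_buf *b, const short *src, int n)
{
    if (n > ADAPT_CAP - b->len)
        n = ADAPT_CAP - b->len;            /* drop overflow (real code would resize) */
    memcpy(b->data + b->len, src, (size_t)n * sizeof(short));
    b->len += n;
    return n;
}

/* Pop exactly one 20 ms frame if available; returns 1 on success, 0 otherwise. */
static int adapt_pop(adapt_buf *b, short *dst)
{
    if (b->len < FRAME_SAMPLES)
        return 0;
    memcpy(dst, b->data, FRAME_SAMPLES * sizeof(short));
    b->len -= FRAME_SAMPLES;
    /* keep the remainder at the front for the next call */
    memmove(b->data, b->data + FRAME_SAMPLES, (size_t)b->len * sizeof(short));
    return 1;
}
```

In "ph_media_retrieve_decoded_frame" the decoded RTP payload would be pushed, and the method would return a frame only when a full 20ms worth of samples is buffered.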

Do any of you have disagreements or questions about this approach?
Can you point out any problems that maybe I haven't seen yet?

Thanks and best regards,
Mauro.




jerome wagner wrote:
Hi
I may be wrong, but I remember this happening sometimes during codec renegotiation (early media, like a ring tone negotiated with a server, and then media negotiated with the endpoint using another codec).
Jerome


2009/7/30 Vadim Lebedev <[email protected]>
Hi Mauro

On 29 Jul 2009 at 20:01, Mauro Sergio Ferreira Brasil wrote:

Hello there!

I was checking the "ph_media_retrieve_decoded_frame" method in order to make this final adjustment, and I got stuck on some considerable blocks of code intended to handle the condition where the size of the RTP packet received from the other party is bigger than expected.
This is indicated by the variable "needreformat".

Is this intended to handle situations where the other party in the conversation doesn't respect the negotiated packetization?
For example, we indicate our preferred packetization through the "ptime:20" attribute, but the other party keeps sending RTP packets with a bigger framesize than expected.
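For reference, such a ptime request shows up in the SDP audio description roughly like this (illustrative offer for PCMU; port number is arbitrary):

```
m=audio 49170 RTP/AVP 0
a=rtpmap:0 PCMU/8000
a=ptime:20
```

Note that "a=ptime" is only a recommendation to the remote party, which is exactly why the oversized-packet case can still occur.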

Is this the only reason this code is used, or am I missing some other situation that would demand such "reformatting"?


Yes, it is the only reason...



Thanks and best regards,
Mauro.




jerome wagner wrote:
Mauro,
I would follow Vadim on this one.

There are 2 cases that should be checked:
  - 1/ really be careful about Linux and MacOSX sound drivers. I agree Windows drivers always give you the "standard" framesizes you ask for, but this is not the case for Linux or MacOSX.
  - 2/ pay attention to far-end media mixing during a 3-way conference. Payloads may arrive from different endpoints with different framesizes; mixing must occur at a shared framesize.

The current phapi approach is optimized in many cases I think (fewer intermediary buffers = less memory usage), and I think that a dynamic graph construction 'a la gstreamer' would be a +:
 * no buffers when they are not needed (and for embedded targets they are, more often than not, not needed)
 * buffers for some scenarios

So a step-by-step approach could be to add an entity with a meta-description of the graph inside phapi, to know which graph elements are activated and which are not, and with what parameters. I don't know about the mediastreamer (ortp side project) integration that Vadim was talking about recently, so I can't really give you more help on this one.

Hope this helps,
Jerome





2009/7/28 Vadim Lebedev <[email protected]>
IMO this is the correct way...


Thanks
Vadim
Mauro Sergio Ferreira Brasil wrote:
Hi Jerome!

I asked how common that condition is because I had no problem here initializing the audio devices of the same soundcard with different framesizes.
I've made lots of test calls using my onboard soundcard on Windows with a variety of framesize scenarios, like: 40ms on TX and 20ms on RX; 30ms on both paths (iLBC's "mode" parameter config), etc.

Now, considering that I can't rely on anything, I suppose the best choice is to lock (i.e. hardcode) the audio device initialization back to the framesize resulting from 20ms packetization for both paths.
This will avoid lots of changes and the inevitable tests they would demand.

We initialize the incoming and outgoing audio paths to always work with a framesize of 160 shorts (or 320 if we work at 16 KHz) - that is the way it used to work before the patch I sent - and create buffer adaptation routines for both paths in "phmedia-audio.c" in order to process the incoming and outgoing data accordingly.
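The 160/320-short figures follow from simple arithmetic; a hypothetical helper (not phapi code) makes the relationship explicit: at a fixed 20ms ptime, the framesize in samples is rate * ptime / 1000.

```c
/* Samples per frame for a given sample rate and packetization interval.
 * frame_samples(8000, 20)  == 160 shorts (8 kHz, 20 ms)
 * frame_samples(16000, 20) == 320 shorts (16 kHz, 20 ms) */
static int frame_samples(int rate_hz, int ptime_ms)
{
    return rate_hz * ptime_ms / 1000;
}
```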

What do you think?

Thanks and best regards,
Mauro.


--
TQI - Technology and Quality on Information
Mauro Sérgio Ferreira Brasil
Project Coordinator and Systems Analyst
[email protected]
www.tqi.com.br
+ 55 (34) 3291-1700
+ 55 (34) 9971-2572





_______________________________________________
QuteCom-dev mailing list
[email protected]
http://lists.qutecom.org/mailman/listinfo/qutecom-dev
