Hi Mauro
On 29 Jul 2009, at 20:01, Mauro Sergio Ferreira Brasil wrote:
Hello there!
I was checking the method "ph_media_retrieve_decoded_frame" in order to
make this final adjustment, and I got stuck on some considerable
blocks of code intended to handle the condition where the size of the RTP
packet received from the other party is bigger than expected.
This is indicated by the variable "needreformat".
Is this intended to handle situations where the other party in the
conversation doesn't respect the negotiated packetization?
For example, we indicated our preferred packetization through the "ptime:
20" attribute, but the other party keeps sending RTP packets with a
bigger frame size than expected.
Is this the only reason this code is used, or am I missing
some other situation that would demand such reformatting?
Yes, it is the only reason...
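[Editor's note: the check being discussed can be sketched like this. The helper names below are invented for illustration and are not phapi's actual API; the idea is simply to compare the number of samples actually received against the number implied by the negotiated ptime and codec clock rate, which is what the "needreformat" flag guards.]

```c
/* Illustrative sketch, not phapi code: derive the expected samples per
 * RTP frame from the negotiated ptime and the codec clock rate, then
 * flag a packet that carries more samples than negotiated. */
static int expected_samples(int ptime_ms, int clock_rate_hz)
{
    return clock_rate_hz / 1000 * ptime_ms;  /* 8000 Hz, 20 ms -> 160 */
}

static int need_reformat(int received_samples, int ptime_ms, int clock_rate_hz)
{
    return received_samples > expected_samples(ptime_ms, clock_rate_hz);
}
```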
Thanks and best regards,
Mauro.
jerome wagner wrote:
Mauro,
I would follow Vadim on this one.
There are 2 cases that should be checked :
- 1/ really be careful about Linux and MacOSX sound drivers. I
agree Windows drivers always give you the "standard" framesizes you
ask for, but this is not the case for Linux or MacOSX
- 2/ pay attention to far-end media mixing during a 3-way
conference. Payloads may arrive from different endpoints with
different framesizes; mixing must occur at a shared framesize.
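[Editor's note: the shared-framesize constraint in point 2/ can be sketched as follows. This is a minimal illustration with invented names, not phapi code: each conference leg buffers its decoded samples in a FIFO, and mixing only runs once every leg has at least the shared framesize available, so legs delivering different framesizes still mix on a common boundary.]

```c
#include <string.h>

#define FIFO_CAP 4096

typedef struct { short buf[FIFO_CAP]; int len; } leg_fifo;

/* append n decoded samples from one conference leg */
static void leg_push(leg_fifo *f, const short *s, int n)
{
    memcpy(f->buf + f->len, s, n * sizeof(short));
    f->len += n;
}

/* mix `shared` samples from two legs into out;
 * returns 0 when either leg does not yet have enough data */
static int mix_legs(leg_fifo *a, leg_fifo *b, short *out, int shared)
{
    if (a->len < shared || b->len < shared)
        return 0;
    for (int i = 0; i < shared; i++)
        out[i] = (short)((a->buf[i] + b->buf[i]) / 2); /* naive average mix */
    memmove(a->buf, a->buf + shared, (a->len - shared) * sizeof(short));
    a->len -= shared;
    memmove(b->buf, b->buf + shared, (b->len - shared) * sizeof(short));
    b->len -= shared;
    return 1;
}
```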
The current phapi approach is optimized in many cases, I think (fewer
intermediary buffers = less memory usage), and I think that a
dynamic graph construction 'a la gstreamer' would be a plus:
* no buffers when they are not needed (and for embedded targets
they are more often than not unneeded)
* buffers for some scenarios
So a step-by-step approach could be to add an entity with a meta-
description of the graph inside phapi, to know which graph elements
are activated and which are not, and with what parameters. I don't know
about the mediastreamer (oRTP side project) integration that Vadim
was talking about recently, so I can't really give you more help on
this one.
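[Editor's note: such a meta-description could look roughly like the following hypothetical sketch. All names are invented; the point is that an element like a framesize resizer is only activated when the in/out framesizes actually differ, so no intermediary buffer exists on the common path.]

```c
/* Hypothetical graph meta-description, in the spirit of the dynamic
 * graph construction suggested above (all names invented). */
typedef enum { ELEM_DECODER, ELEM_RESIZER, ELEM_MIXER, ELEM_SINK } elem_kind;

typedef struct graph_elem {
    elem_kind kind;
    int active;              /* instantiated only when actually needed */
    int framesize;           /* samples per frame this element works on */
    struct graph_elem *next;
} graph_elem;

/* a resizer (and its buffer) is only needed when framesizes differ */
static int resizer_needed(int in_framesize, int out_framesize)
{
    return in_framesize != out_framesize;
}
```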
Hope this helps,
Jerome
2009/7/28 Vadim Lebedev <[email protected]>
IMO this is the correct way...
Thanks
Vadim
Mauro Sergio Ferreira Brasil wrote:
Hi Jerome!
I asked for information about how common that condition is because
I had no problem here initializing the audio devices of the same
soundcard with different framesizes.
I've made lots of test calls using my onboard soundcard on Windows
with a variety of framesize scenarios, like: 40ms on TX and
20ms on RX; 30ms on both paths (iLBC's "mode" parameter config),
etc.
Now, considering that I can't rely on anything, I suppose the best
choice is to go back to having the audio device initialization fixed
(i.e. hardcoded) to work with the framesize resulting from 20ms
packetization on both paths.
This will avoid lots of changes and the tests they would inevitably demand.
We initialize the incoming and outgoing audio paths to always work
with a framesize of 160 shorts (or 320 if we work at 16 kHz) - that
is the way it used to work before the patch I sent - and create
buffer adaptation routines for both paths in "phmedia-audio.c" in
order to process the incoming and outgoing data accordingly.
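[Editor's note: the buffer-adaptation routines proposed here could be sketched like this. Helper names are invented and this is not the actual phmedia-audio.c code: the adapter accumulates whatever framesize the device or network delivers and hands out fixed 160-short (20 ms at 8 kHz) frames.]

```c
#include <string.h>

#define FRAME_SHORTS 160   /* 20 ms at 8 kHz; 320 at 16 kHz */
#define ADAPT_CAP    4096

typedef struct { short buf[ADAPT_CAP]; int len; } frame_adapter;

/* accumulate n samples of whatever framesize the source delivers */
static void adapter_feed(frame_adapter *a, const short *in, int n)
{
    memcpy(a->buf + a->len, in, n * sizeof(short));
    a->len += n;
}

/* copy one fixed-size frame into out; returns 0 when not enough data yet */
static int adapter_pull(frame_adapter *a, short out[FRAME_SHORTS])
{
    if (a->len < FRAME_SHORTS)
        return 0;
    memcpy(out, a->buf, FRAME_SHORTS * sizeof(short));
    memmove(a->buf, a->buf + FRAME_SHORTS,
            (a->len - FRAME_SHORTS) * sizeof(short));
    a->len -= FRAME_SHORTS;
    return 1;
}
```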
What do you think ?
Thanks and best regards,
Mauro.
--
Regards,
Technology and Quality on Information
Mauro Sérgio Ferreira Brasil
Project Coordinator and Systems Analyst
[email protected]
www.tqi.com.br
+ 55 (34)3291-1700
+ 55 (34)9971-2572
_______________________________________________
QuteCom-dev mailing list
[email protected]
http://lists.qutecom.org/mailman/listinfo/qutecom-dev