Re: [asterisk-users] Fwd: RTP stats explanation
Hi Dave,

On Fri, May 18, 2012 at 11:27 PM, Dave Platt wrote:

>> In our app we do not forward packets immediately. After enough packets
>> have been received to increase the RTP packetization time (ptime), we
>> forward the message over a raw socket and set the DSCP to 10 so that
>> this time the packets can escape the iptables rules.
>>
>> From the client side, the RTP stream analysis shows nearly every
>> stream as problematic. Summaries for some streams are given below:
>>
>> Stream 1:
>>
>> Max delta = 1758.72 ms at packet no. 40506
>> Max jitter = 231.07 ms. Mean jitter = 9.27 ms.
>> Max skew = -2066.18 ms.
>> Total RTP packets = 468 (expected 468), Lost RTP packets = 0 (0.00%),
>> Sequence errors = 0
>> Duration 23.45 s (-22628 ms clock drift, corresponding to 281 Hz (-96.49%))
>>
>> Stream 2:
>>
>> Max delta = 1750.96 ms at packet no. 45453
>> Max jitter = 230.90 ms. Mean jitter = 7.50 ms.
>> Max skew = -2076.96 ms.
>> Total RTP packets = 468 (expected 468), Lost RTP packets = 0 (0.00%),
>> Sequence errors = 0
>> Duration 23.46 s (-22715 ms clock drift, corresponding to 253 Hz (-96.84%))
>>
>> Stream 3:
>>
>> Max delta = 71.47 ms at packet no. 25009
>> Max jitter = 6.05 ms. Mean jitter = 2.33 ms.
>> Max skew = -29.09 ms.
>> Total RTP packets = 258 (expected 258), Lost RTP packets = 0 (0.00%),
>> Sequence errors = 0
>> Duration 10.28 s (-10181 ms clock drift, corresponding to 76 Hz (-99.05%))
>>
>> Any idea where we should look for the problem?
>
> A maximum jitter of 230 milliseconds looks pretty horrendous to me.
> This is going to cause really serious audio stuttering on the
> receiving side, and/or will force the use of such a long "jitter
> buffer" by the receiver that the audio will suffer from an
> infuriating amount of delay. Even a local call would sound as if
> it's coming from overseas via a satellite-radio link.
>
> I suspect it's likely due to a combination of two things:
>
> (1) The fact that you are deliberately delaying the forwarding
> of the packets.
> This adds latency, and if you're forwarding packets in batches it
> will also add jitter.

There is no way around doing this: we need enough packets queued before we can repacketize. Asterisk also does this via "allow=g729:120" in sip.conf, but we have seen Asterisk fail to do it under various circumstances, which is why we are trying to do it before the traffic reaches Asterisk. We could also learn Asterisk development and then modify Asterisk to meet our needs, but writing a separate application seems more logical to me, because then we will not be bound to one platform. It will be better if this repacketization is telephony-platform-agnostic. We also have some FreeSWITCH boxes, and a proprietary platform for which we do not have the source code. FreeSWITCH has a severe limitation regarding this feature because of its dependence on the L16 format when communicating with the transcoder card: it only supports up to 50 ms of ptime for G.729. The other platform's vendor does not intend to support it either. But this feature is very critical to our operation.

If we moved this application into kernel space by writing a kernel module, would that help? What constraints do we need to be aware of if we start writing a kernel module to provide this functionality?

> (2) Scheduling delays. If your forwarding app fails to run its
> code on a very regular schedule - if, for example, it's delayed
> or preempted by a higher-priority task, or if some of its code
> is paged/swapped out due to memory pressure and has to be paged
> back in - this will also add latency and jitter.
>
> Pushing real-time IP traffic up through the application layer like
> this is going to be tricky. You may be able to deal with issue (2)
> by locking your app into memory with mlock() and setting it to run
> at a "real-time" scheduling priority.

We will test it and post further results.

> Issue (1) - well, I really think you need to avoid doing this.
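To make issue (1) concrete, the repacketization step itself is simple. A minimal sketch in Python (packet layout only; our real code works on raw sockets, and make_rtp()/merge_rtp_packets() are illustrative names, not our actual functions):

```python
import struct

def make_rtp(seq, ts, payload, pt=18, ssrc=0x12345678):
    # Minimal 12-byte RTP header: V=2, no padding/extension/CSRC,
    # marker bit clear. Payload type 18 is G.729.
    return struct.pack("!BBHII", 0x80, pt, seq, ts, ssrc) + payload

def merge_rtp_packets(packets):
    # Combine consecutive RTP packets of one stream into a single
    # packet with a larger ptime: keep the first packet's header
    # (sequence number and timestamp) and concatenate the payloads.
    header = packets[0][:12]
    return header + b"".join(p[12:] for p in packets)

# Two 20 ms G.729 packets (20-byte payloads, timestamps 160 apart
# at the 8 kHz RTP clock) become one 40 ms packet.
p1 = make_rtp(seq=100, ts=0,   payload=b"\x01" * 20)
p2 = make_rtp(seq=101, ts=160, payload=b"\x02" * 20)
merged = merge_rtp_packets([p1, p2])
```

The cost, as you point out, is that the first frame of each batch sits in our queue until the last frame arrives, which is exactly the added latency you describe.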
> Push the packets down into the kernel for retransmission as quickly
> as you can. If you need to rate-limit or rate-pace their sending,
> use something like the Linux kernel's traffic-shaping features.
>
> Is there other network traffic flowing to/from this particular
> machine? It's possible that other outbound traffic is saturating
> network-transmit buffers somewhere - either in the kernel, or in
> an "upstream" communication node such as a router or DSL modem.
> If this happens, there's no guarantee that "high priority" or
> "expedited delivery" packets would be given priority over
> (e.g.) FTP uploads... many routers/switches/modems don't pay
> attention to the class-of-service on IP packets.
>
> To prevent this, you'd need to use traffic shaping features on
> your system, to "pace" the transmission of *all* packets so that
> the total transmission rate is slightly below the lowest-bandwidth
> segment of your uplink. You'd also want to use multiple queues
> to give expedited-delivery packets priority over bulk-data packets.
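For issue (2), this is roughly what we plan to test: mlockall() plus SCHED_FIFO, reached from Python via ctypes and the os module. A sketch only - the priority value 50 is an arbitrary choice, and since both calls need root/CAP_SYS_NICE the sketch reports failures instead of dying:

```python
import ctypes
import os

MCL_CURRENT, MCL_FUTURE = 1, 2   # from <sys/mman.h> on Linux

def lock_and_go_realtime(priority=50):
    """Pin all current and future pages in RAM (no page-in stalls),
    then move the process into the SCHED_FIFO real-time class."""
    libc = ctypes.CDLL(None, use_errno=True)
    if libc.mlockall(MCL_CURRENT | MCL_FUTURE) != 0:
        print("mlockall failed:", os.strerror(ctypes.get_errno()))
    try:
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))
    except (PermissionError, AttributeError):
        print("SCHED_FIFO needs root / CAP_SYS_NICE; staying on default policy")
```

We will report back whether this tames the scheduling-induced jitter.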
Re: [asterisk-users] Fwd: RTP stats explanation
On 05/18/2012 12:51 PM, Steve Edwards wrote:
> On Fri, 18 May 2012, Dave Platt wrote:
>> A maximum jitter of 230 milliseconds looks pretty horrendous to me.
>> This is going to cause really serious audio stuttering on the
>> receiving side, and/or will force the use of such a long "jitter
>> buffer" by the receiver that the audio will suffer from an
>> infuriating amount of delay. Even a local call would sound as if
>> it's coming from overseas via a satellite-radio link.
>
> Won't a cell-to-cell call experience delays in the 300 ms range? Many
> moons ago I remember listening with a cell while tapping on the table
> with another cell and being stunned with the magnitude of the delay -
> and that most people manage to carry on conversations without noticing.

Yes, cellular networks have largish latencies, but no jitter.

--
Kevin P. Fleming
Digium, Inc. | Director of Software Technologies
Jabber: kflem...@digium.com | SIP: kpflem...@digium.com | Skype: kpfleming
445 Jan Davis Drive NW - Huntsville, AL 35806 - USA
Check us out at www.digium.com & www.asterisk.org

--
Bandwidth and Colocation Provided by http://www.api-digital.com
New to Asterisk? Join us for a live introductory webinar every Thurs:
http://www.asterisk.org/hello

asterisk-users mailing list
To UNSUBSCRIBE or update options visit:
http://lists.digium.com/mailman/listinfo/asterisk-users
Re: [asterisk-users] Fwd: RTP stats explanation
On Fri, 18 May 2012, Dave Platt wrote:
> A maximum jitter of 230 milliseconds looks pretty horrendous to me.
> This is going to cause really serious audio stuttering on the
> receiving side, and/or will force the use of such a long "jitter
> buffer" by the receiver that the audio will suffer from an
> infuriating amount of delay. Even a local call would sound as if
> it's coming from overseas via a satellite-radio link.

Won't a cell-to-cell call experience delays in the 300 ms range? Many moons ago I remember listening with a cell while tapping on the table with another cell and being stunned with the magnitude of the delay - and that most people manage to carry on conversations without noticing.

--
Thanks in advance,
Steve Edwards  sedwa...@sedwards.com
Voice: +1-760-468-3867 PST
Fax: +1-760-731-3000
Re: [asterisk-users] Fwd: RTP stats explanation
> In our app we do not forward packets immediately. After enough packets
> have been received to increase the RTP packetization time (ptime), we
> forward the message over a raw socket and set the DSCP to 10 so that
> this time the packets can escape the iptables rules.
>
> From the client side, the RTP stream analysis shows nearly every
> stream as problematic. Summaries for some streams are given below:
>
> Stream 1:
>
> Max delta = 1758.72 ms at packet no. 40506
> Max jitter = 231.07 ms. Mean jitter = 9.27 ms.
> Max skew = -2066.18 ms.
> Total RTP packets = 468 (expected 468), Lost RTP packets = 0 (0.00%),
> Sequence errors = 0
> Duration 23.45 s (-22628 ms clock drift, corresponding to 281 Hz (-96.49%))
>
> Stream 2:
>
> Max delta = 1750.96 ms at packet no. 45453
> Max jitter = 230.90 ms. Mean jitter = 7.50 ms.
> Max skew = -2076.96 ms.
> Total RTP packets = 468 (expected 468), Lost RTP packets = 0 (0.00%),
> Sequence errors = 0
> Duration 23.46 s (-22715 ms clock drift, corresponding to 253 Hz (-96.84%))
>
> Stream 3:
>
> Max delta = 71.47 ms at packet no. 25009
> Max jitter = 6.05 ms. Mean jitter = 2.33 ms.
> Max skew = -29.09 ms.
> Total RTP packets = 258 (expected 258), Lost RTP packets = 0 (0.00%),
> Sequence errors = 0
> Duration 10.28 s (-10181 ms clock drift, corresponding to 76 Hz (-99.05%))
>
> Any idea where we should look for the problem?

A maximum jitter of 230 milliseconds looks pretty horrendous to me. This is going to cause really serious audio stuttering on the receiving side, and/or will force the use of such a long "jitter buffer" by the receiver that the audio will suffer from an infuriating amount of delay. Even a local call would sound as if it's coming from overseas via a satellite-radio link.

I suspect it's likely due to a combination of two things:

(1) The fact that you are deliberately delaying the forwarding of the packets. This adds latency, and if you're forwarding packets in batches it will also add jitter.

(2) Scheduling delays.
If your forwarding app fails to run its code on a very regular schedule - if, for example, it's delayed or preempted by a higher-priority task, or if some of its code is paged/swapped out due to memory pressure and has to be paged back in - this will also add latency and jitter.

Pushing real-time IP traffic up through the application layer like this is going to be tricky. You may be able to deal with issue (2) by locking your app into memory with mlock() and setting it to run at a "real-time" scheduling priority.

Issue (1) - well, I really think you need to avoid doing this. Push the packets down into the kernel for retransmission as quickly as you can. If you need to rate-limit or rate-pace their sending, use something like the Linux kernel's traffic-shaping features.

Is there other network traffic flowing to/from this particular machine? It's possible that other outbound traffic is saturating network-transmit buffers somewhere - either in the kernel, or in an "upstream" communication node such as a router or DSL modem. If this happens, there's no guarantee that "high priority" or "expedited delivery" packets would be given priority over (e.g.) FTP uploads... many routers/switches/modems don't pay attention to the class-of-service on IP packets.

To prevent this, you'd need to use traffic shaping features on your system, to "pace" the transmission of *all* packets so that the total transmission rate is slightly below the lowest-bandwidth segment of your uplink. You'd also want to use multiple queues to give expedited-delivery packets priority over bulk-data packets. The "Ultimate Linux traffic-shaper" page shows how to accomplish this on a Linux system; the same principles, with different details, apply on other operating systems.
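As an aside on class-of-service: the DSCP-10 marking you mentioned is set through the IP TOS byte, whose upper six bits are the DSCP field. A minimal sketch (ordinary UDP socket shown for illustration; a raw socket would write the TOS byte into the header it builds):

```python
import socket

DSCP = 10            # AF11, the value used in the original post
TOS = DSCP << 2      # DSCP occupies the upper 6 bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS)
# Every datagram sent on this socket now carries DSCP 10, which
# iptables can match with "-m dscp --dscp 10".
```

Whether anything beyond your own box honors that marking is, as noted above, up to each router along the path.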