Hi, I am glad you guys found a solution. Thanks for sharing it.
Regards,
Julien

On Wed, Aug 22, 2018, 21:43 Rogelio Perez <roge...@telnyx.com> wrote:

> Hi Julien,
>
> Thanks for checking on this.
> I've been working in the background with Charles on this issue, and we
> think we've found a solution, although the cause isn't clear to me yet.
> Following Charles' advice, we changed the usrloc module parameter db_mode
> from 1 (Write-Through) to 2 (Write-Back), and there have been no more
> memory leak incidents since then.
> I'll report back if we have any further updates.
>
> Best,
> Rogelio
>
> On Wed, Aug 22, 2018 at 11:39 AM Julien Chavanton <jchavan...@gmail.com> wrote:
>
>> Hi Rogelio, did you have any luck digging into this leak further?
>>
>> On Wed, Aug 8, 2018 at 3:37 AM Charles Chance <charles.cha...@sipcentric.com> wrote:
>>
>>> Hi Rogelio,
>>>
>>> I have been running master on a three-node lab (one primary, two
>>> secondary) for the past 24 hours or so, maintaining 2000 registrations
>>> on the primary, replicating to both secondaries, and memory usage has
>>> remained constant throughout.
>>>
>>> I will leave it running for another 24 hours to be sure, but in the
>>> meantime: you mentioned you are loading records from the DB - which
>>> mode are you using for writing (write-through or write-back)? Do you
>>> experience the same symptoms if you disable the database completely on
>>> the secondary nodes (or just one, for testing) and instead enable sync
>>> in dmq_usrloc?
>>>
>>> Cheers,
>>>
>>> Charles
>>>
>>> On 7 August 2018 at 16:42, Julien Chavanton <jchavan...@gmail.com> wrote:
>>>
>>>> I wonder whether this was introduced by a regression or whether you
>>>> are facing a specific edge case.
>>>>
>>>> I briefly looked at the commits for DMQ and DMQ_USRLOC; it seems
>>>> significant work has been done there. I would give 5.0.0 a try - then
>>>> we will at least learn whether or not this is a recent regression.
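For anyone finding this thread later: the change Rogelio describes, and Charles's suggested alternative, would look something like the sketch below in kamailio.cfg. The db_mode values (1 = Write-Through, 2 = Write-Back) come from this thread; the dmq_usrloc parameter names are my assumption based on the module docs, so check them against your Kamailio version before use.

```cfg
# usrloc: switch location storage from write-through to write-back,
# which is the change that stopped the memory growth in this thread
modparam("usrloc", "db_mode", 2)   # was 1 (Write-Through)

# Alternative Charles suggested for secondary nodes (hypothetical sketch):
# disable DB writes entirely and rely on DMQ sync to repopulate usrloc
#modparam("usrloc", "db_mode", 0)
#modparam("dmq_usrloc", "enable", 1)
#modparam("dmq_usrloc", "sync", 1)
```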
>>>> On Mon, Aug 6, 2018 at 1:43 PM, Rogelio Perez <roge...@telnyx.com> wrote:
>>>>
>>>>> Charles, Julien, Daniel,
>>>>>
>>>>> The results are pretty much the same: the memory leak is still there,
>>>>> and we need to restart Kamailio when it reaches a certain threshold.
>>>>> https://www.dropbox.com/s/enxx6b7t0c8vl49/Selection_539.png?dl=0
>>>>>
>>>>> Is there anything else we can try?
>>>>> Would a core dump file tell us what's causing it?
>>>>>
>>>>> Thanks,
>>>>> Rogelio
>>>>>
>>>>> On Thu, Aug 2, 2018 at 2:57 PM Rogelio Perez <roge...@telnyx.com> wrote:
>>>>>
>>>>>> Thanks Charles, it's working now.
>>>>>> I'm deploying to production and will confirm the results soon.
>>>>>>
>>>>>> Rogelio
>>>>>
>>>>> --
>>>>> Rogelio Perez | engineering | telnyx <https://telnyx.com>
>>>>> chicago: +1 312 270 8119 | dublin: +353 1 912 6119
>>>
>>> Sipcentric Ltd. Company registered in England & Wales no. 7365592.
>>> Registered office: Faraday Wharf, Innovation Birmingham Campus,
>>> Holt Street, Birmingham Science Park, Birmingham B7 4BB.
_______________________________________________
Kamailio (SER) - Users Mailing List
sr-users@lists.kamailio.org
https://lists.kamailio.org/cgi-bin/mailman/listinfo/sr-users