Re: [sr-dev] [kamailio/kamailio] Segmentation fault on tm:t_should_relay_response (#1875)

2019-02-28 Thread Daniel-Constantin Mierla
Can you get from gdb the output for:

```
frame 0
p *t
```

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/issues/1875#issuecomment-468566636
___
Kamailio (SER) - Development Mailing List
sr-dev@lists.kamailio.org
https://lists.kamailio.org/cgi-bin/mailman/listinfo/sr-dev


Re: [sr-dev] [kamailio/kamailio] Segmentation fault on tm:t_should_relay_response (#1875)

2019-02-28 Thread Fernando S. Santos
Hello @miconda, I think this patch introduced a new bug in the tmx module.
Now I'm getting a segfault in tmx.so:

### Log
```
[345284.567219] kamailio[88343]: segfault at 1f4 ip 7fa4b771f934 sp 7ffd06a1e710 error 4 in tmx.so[7fa4b770c000+1d000]
[345332.406311] kamailio[88635]: segfault at 1f4 ip 7fcb136e0934 sp 7ffeda371e60 error 4 in tmx.so[7fcb136cd000+1d000]
[345488.107701] kamailio[88940]: segfault at 1f4 ip 7f2fffba9934 sp 7fff2bf8b7f0 error 4 in tmx.so[7f2fffb96000+1d000]
[345517.133371] kamailio[89337]: segfault at 244 ip 7f7ae3d19934 sp 7fff6f699350 error 4 in tmx.so[7f7ae3d06000+1d000]
[345546.632373] kamailio[89602]: segfault at 1f4 ip 7f02d6019934 sp 7ffe5d33ac50 error 4 in tmx.so[7f02d6006000+1d000]
[345568.432423] kamailio[89742]: segfault at 1f4 ip 7f4e5094a934 sp 7fffd5915930 error 4 in tmx.so[7f4e50937000+1d000]
```
### GDB info
```
(gdb) frame 0
#0  0x7f4e5094a934 in pv_get_tm_reply_code (msg=0x7f4e2cd14cb8, 
param=0x7f4e55a61328, res=0x7fffd5915aa0) at t_var.c:528
528 code = t->uac[branch].last_received;


(gdb) info locals
t = 0x7f4e2cd0d928
code = 32590
branch = 0
__FUNCTION__ = "pv_get_tm_reply_code"


(gdb) list
523 if ( (branch=_tmx_tmb.t_get_picked_branch())<0 ) {
524 LM_CRIT("no picked branch (%d) for a final response"
525 " in MODE_ONFAILURE\n", branch);
526 code = 0;
527 } else {
528 code = t->uac[branch].last_received;
529 }
530 break;
531 default:
532 LM_INFO("unsupported route_type %d - code set to 0\n",


(gdb) bt
#0  0x7f4e5094a934 in pv_get_tm_reply_code (msg=0x7f4e2cd14cb8, 
param=0x7f4e55a61328, res=0x7fffd5915aa0) at t_var.c:528
#1  0x005d0874 in pv_get_spec_value (msg=0x7f4e2cd14cb8, 
sp=0x7f4e55a61310, value=0x7fffd5915aa0) at core/pvapi.c:1380
#2  0x00582062 in lval_pvar_assign (h=0x7fffd5916340, 
msg=0x7f4e2cd14cb8, lv=0x7f4e55a61098, rv=0x7f4e55a61308) at core/lvalue.c:335
#3  0x00582d91 in lval_assign (h=0x7fffd5916340, msg=0x7f4e2cd14cb8, 
lv=0x7f4e55a61098, rve=0x7f4e55a61300) at core/lvalue.c:400
#4  0x0059647d in do_action (h=0x7fffd5916340, a=0x7f4e55a61a30, 
msg=0x7f4e2cd14cb8) at core/action.c:1443
#5  0x00597f6e in run_actions (h=0x7fffd5916340, a=0x7f4e55a60d68, 
msg=0x7f4e2cd14cb8) at core/action.c:1564
#6  0x00598683 in run_top_route (a=0x7f4e55a60d68, msg=0x7f4e2cd14cb8, 
c=0x0) at core/action.c:1646
#7  0x7f4e50bb877f in run_failure_handlers (t=0x7f4e2cd0d928, 
rpl=0x, code=408, extra_flags=96) at t_reply.c:1002
#8  0x7f4e50bbbc55 in t_should_relay_response (Trans=0x7f4e2cd0d928, 
new_code=408, branch=0, should_store=0x7fffd59166fc, 
should_relay=0x7fffd5916700, cancel_data=0x7fffd59167b0, 
reply=0x) at t_reply.c:1376
#9  0x7f4e50bbef0b in relay_reply (t=0x7f4e2cd0d928, 
p_msg=0x, branch=0, msg_status=408, cancel_data=0x7fffd59167b0, 
do_put_on_wait=0) at t_reply.c:1802
#10 0x7f4e50c20b5b in fake_reply (t=0x7f4e2cd0d928, branch=0, code=408) at 
timer.c:340
#11 0x7f4e50c20fe8 in final_response_handler (r_buf=0x7f4e2cd0db50, 
t=0x7f4e2cd0d928) at timer.c:506
#12 0x7f4e50c21097 in retr_buf_handler (ticks=262070135, tl=0x7f4e2cd0db70, 
p=0x3e8) at timer.c:562
#13 0x004a0134 in timer_list_expire (t=262070135, h=0x7f4e2c741690, 
slow_l=0x7f4e2c7418c8, slow_mark=0) at core/timer.c:874
#14 0x004a0595 in timer_handler () at core/timer.c:939
#15 0x004a0a3f in timer_main () at core/timer.c:978
#16 0x00425416 in main_loop () at main.c:1693
#17 0x0042c078 in main (argc=9, argv=0x7fffd5916e18) at main.c:2645


(gdb) bt full
#0  0x7f4e5094a934 in pv_get_tm_reply_code (msg=0x7f4e2cd14cb8, 
param=0x7f4e55a61328, res=0x7fffd5915aa0) at t_var.c:528
t = 0x7f4e2cd0d928
code = 32590
branch = 0
__FUNCTION__ = "pv_get_tm_reply_code"
#1  0x005d0874 in pv_get_spec_value (msg=0x7f4e2cd14cb8, 
sp=0x7f4e55a61310, value=0x7fffd5915aa0) at core/pvapi.c:1380
ret = 0
__FUNCTION__ = "pv_get_spec_value"
#2  0x00582062 in lval_pvar_assign (h=0x7fffd5916340, 
msg=0x7f4e2cd14cb8, lv=0x7f4e55a61098, rv=0x7f4e55a61308) at core/lvalue.c:335
pvar = 0x7f4e55a60fb8
pval = {rs = {s = 0x0, len = 0}, ri = 0, flags = 0}
r_avp = 0x7fffd5916178
avp_val = {n = 631, s = {s = 0x277 , len = 
1490070754}, re = 0x277}
ret = 0
v = 110
destroy_pval = 0
__FUNCTION__ = "lval_pvar_assign"
#3  0x00582d91 in lval_assign (h=0x7fffd5916340, msg=0x7f4e2cd14cb8, 
lv=0x7f4e55a61098, rve=0x7f4e55a61300) at core/lvalue.c:400
rv = 0x7f4e55a61308
ret = 0
__FUNCTION__ = "lval_assign"
#4  0x0059647d in do_action (h=0x7fffd5916340, 

Re: [sr-dev] [kamailio/kamailio] Infinite loop inside htable module during dmq synch (#1863)

2019-02-28 Thread Charles Chance
Closed #1863.

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/issues/1863#event-2172429887
___
Kamailio (SER) - Development Mailing List
sr-dev@lists.kamailio.org
https://lists.kamailio.org/cgi-bin/mailman/listinfo/sr-dev


Re: [sr-dev] [kamailio/kamailio] Infinite loop inside htable module during dmq synch (#1863)

2019-02-28 Thread Charles Chance
Fix has been merged/backported. Please reopen if still experiencing issues.

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/issues/1863#issuecomment-468454254
___
Kamailio (SER) - Development Mailing List
sr-dev@lists.kamailio.org
https://lists.kamailio.org/cgi-bin/mailman/listinfo/sr-dev


Re: [sr-dev] [kamailio/kamailio] Segmentation fault on tm:t_should_relay_response (#1875)

2019-02-28 Thread Fernando S. Santos
I'll apply this patch tonight on a production server, check tomorrow whether the issue is fixed, and provide feedback.

Thanks for your fast response.

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/issues/1875#issuecomment-468413449
___
Kamailio (SER) - Development Mailing List
sr-dev@lists.kamailio.org
https://lists.kamailio.org/cgi-bin/mailman/listinfo/sr-dev


Re: [sr-dev] [kamailio/kamailio] Segmentation fault on tm:t_should_relay_response (#1875)

2019-02-28 Thread Daniel-Constantin Mierla
Likely it is the same issue I hunted recently, and I actually just pushed a commit for it:

  * 814d5cc1f4f5b1e4b95737108dffc1e7d7bd566f

Not much testing so far, I will do more tomorrow -- anyhow, hopefully it fixes the issue.

From what I troubleshot, the crash happened due to a race in accessing the transaction: a reply for a terminated transaction (one that had already received a final reply) arrived at the moment the wait timer fired for that transaction (5 sec after the final reply). The timer process then destroyed the transaction while another process was still handling the late reply.
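
For context, the commit message describes the mitigation: put the transaction back in the wait timer when it is still referenced more than once, with a safety upper limit. A minimal sketch of that pattern follows; the names `wait_retries`, `WAIT_RETRY_MAX` and `WAIT_TIMER_INTERVAL` are illustrative, not the exact tm internals:

```c
/* Sketch only, not the actual tm code: when the wait timer fires,
 * another process may still hold a reference to the transaction
 * (e.g. while handling a late reply), so freeing it now would leave
 * that process with a dangling pointer. */
ticks_t wait_handler(ticks_t ti, struct timer_ln *wait_tl, void *data)
{
	tm_cell_t *t = (tm_cell_t *)data;

	if (atomic_get(&t->ref_count) > 1 && t->wait_retries < WAIT_RETRY_MAX) {
		/* still referenced elsewhere: re-arm the wait timer
		 * instead of destroying the transaction now */
		t->wait_retries++;
		return WAIT_TIMER_INTERVAL; /* non-zero: fire again later */
	}
	free_cell(t); /* we hold the last reference: safe to destroy */
	return 0; /* zero: do not re-arm */
}
```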

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/issues/1875#issuecomment-468405036
___
Kamailio (SER) - Development Mailing List
sr-dev@lists.kamailio.org
https://lists.kamailio.org/cgi-bin/mailman/listinfo/sr-dev


[sr-dev] git:5.2:cb58a13f: htable: fix infinite loop during dmq sync of large tables

2019-02-28 Thread Charles Chance
Module: kamailio
Branch: 5.2
Commit: cb58a13f3a7594e721c08ef9aff458108e05db57
URL: 
https://github.com/kamailio/kamailio/commit/cb58a13f3a7594e721c08ef9aff458108e05db57

Author: Charles Chance 
Committer: Charles Chance 
Date: 2019-02-28T19:17:36Z

htable: fix infinite loop during dmq sync of large tables

- reported by Enrico Bandiera (GH #1863)

(cherry picked from commit a176ad4fb4167e21b01974e6a5caba330b1d7e14)

---

Modified: src/modules/htable/ht_dmq.c

---

Diff:  
https://github.com/kamailio/kamailio/commit/cb58a13f3a7594e721c08ef9aff458108e05db57.diff
Patch: 
https://github.com/kamailio/kamailio/commit/cb58a13f3a7594e721c08ef9aff458108e05db57.patch

---

diff --git a/src/modules/htable/ht_dmq.c b/src/modules/htable/ht_dmq.c
index 986c2c769a..1f27e97684 100644
--- a/src/modules/htable/ht_dmq.c
+++ b/src/modules/htable/ht_dmq.c
@@ -139,36 +139,43 @@ static int ht_dmq_cell_group_flush(dmq_node_t* node) {
 
srjson_doc_t *jdoc = &ht_dmq_jdoc_cell_group.jdoc;
srjson_t *jdoc_cells = ht_dmq_jdoc_cell_group.jdoc_cells;
+   int ret = 0;
 
srjson_AddItemToObject(jdoc, jdoc->root, "cells", jdoc_cells);
 
-   LM_DBG("json[%s]\n", srjson_PrintUnformatted(jdoc, jdoc->root));
+   LM_DBG("jdoc size[%d]\n", ht_dmq_jdoc_cell_group.size);
jdoc->buf.s = srjson_PrintUnformatted(jdoc, jdoc->root);
if(jdoc->buf.s==NULL) {
LM_ERR("unable to serialize data\n");
-   return -1;
+   ret = -1;
+   goto cleanup;
}
jdoc->buf.len = strlen(jdoc->buf.s);
 
LM_DBG("sending serialized data %.*s\n", jdoc->buf.len, jdoc->buf.s);
if (ht_dmq_send(&jdoc->buf, node)!=0) {
LM_ERR("unable to send data\n");
-   return -1;
+   ret = -1;
}
 
-   LM_DBG("jdoc size[%d]\n", ht_dmq_jdoc_cell_group.size);
+cleanup:
+
+   srjson_DeleteItemFromObject(jdoc, jdoc->root, "cells");
+   ht_dmq_jdoc_cell_group.count = 0;
+   ht_dmq_jdoc_cell_group.size = dmq_cell_group_empty_size;
+
+   if(jdoc->buf.s!=NULL) {
+   jdoc->free_fn(jdoc->buf.s);
+   jdoc->buf.s = NULL;
+   }
 
-   srjson_Delete(jdoc, jdoc_cells);
ht_dmq_jdoc_cell_group.jdoc_cells = srjson_CreateArray(&ht_dmq_jdoc_cell_group.jdoc);
if (ht_dmq_jdoc_cell_group.jdoc_cells==NULL) {
LM_ERR("cannot re-create json cells array! \n");
-   return -1;
+   ret = -1;
}
 
-   ht_dmq_jdoc_cell_group.count = 0;
-   ht_dmq_jdoc_cell_group.size = dmq_cell_group_empty_size;
-
-   return 0;
+   return ret;
 }
 
 static void ht_dmq_cell_group_destroy() {
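
For readers skimming the diff: the fix applies the single-exit cleanup idiom. All error paths now funnel through `cleanup`, which detaches the cells array from the document root with `srjson_DeleteItemFromObject()` (the old `srjson_Delete()` freed the array while the root still linked it), resets the group counters, and frees the serialized buffer, so a failed flush leaves the group empty instead of stuck at full size. The idiom in miniature, with a hypothetical batch type rather than the htable module's code:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical batch type for illustration only. */
typedef struct { int count; int size; char *buf; } batch_t;

int flush_batch(batch_t *b)
{
	int ret = 0;

	b->buf = strdup("serialized-cells"); /* stand-in for serialization */
	if (b->buf == NULL) {
		ret = -1;
		goto cleanup;
	}
	if (printf("sending %s\n", b->buf) < 0) /* stand-in for sending */
		ret = -1;

cleanup:
	/* always leave the batch empty and reusable, even on error,
	 * and never leak the buffer on the failure paths */
	b->count = 0;
	b->size = 0;
	free(b->buf);
	b->buf = NULL;
	return ret;
}
```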


___
Kamailio (SER) - Development Mailing List
sr-dev@lists.kamailio.org
https://lists.kamailio.org/cgi-bin/mailman/listinfo/sr-dev


[sr-dev] git:master:814d5cc1: tm: put back t in wait timer if still referenced more than once

2019-02-28 Thread Daniel-Constantin Mierla
Module: kamailio
Branch: master
Commit: 814d5cc1f4f5b1e4b95737108dffc1e7d7bd566f
URL: 
https://github.com/kamailio/kamailio/commit/814d5cc1f4f5b1e4b95737108dffc1e7d7bd566f

Author: Daniel-Constantin Mierla 
Committer: Daniel-Constantin Mierla 
Date: 2019-02-28T20:17:34+01:00

tm: put back t in wait timer if still referenced more than once

- have a safety upper limit for putting back in wait timer
- special credits to Yufei Tao for testing and helping to troubleshoot

---

Modified: src/modules/tm/h_table.c
Modified: src/modules/tm/h_table.h
Modified: src/modules/tm/t_funcs.c
Modified: src/modules/tm/t_funcs.h
Modified: src/modules/tm/timer.c

---

Diff:  
https://github.com/kamailio/kamailio/commit/814d5cc1f4f5b1e4b95737108dffc1e7d7bd566f.diff
Patch: 
https://github.com/kamailio/kamailio/commit/814d5cc1f4f5b1e4b95737108dffc1e7d7bd566f.patch


___
Kamailio (SER) - Development Mailing List
sr-dev@lists.kamailio.org
https://lists.kamailio.org/cgi-bin/mailman/listinfo/sr-dev


[sr-dev] git:master:7702fba4: Merge pull request #1872 from kamailio/cchance/htable_dmq_fix

2019-02-28 Thread GitHub
Module: kamailio
Branch: master
Commit: 7702fba4845fae8defe80ac739055b83e0123fac
URL: 
https://github.com/kamailio/kamailio/commit/7702fba4845fae8defe80ac739055b83e0123fac

Author: Charles Chance 
Committer: GitHub 
Date: 2019-02-28T19:11:44Z

Merge pull request #1872 from kamailio/cchance/htable_dmq_fix

htable: fix infinite loop during dmq sync of large tables

---

Modified: src/modules/htable/ht_dmq.c

---

Diff:  
https://github.com/kamailio/kamailio/commit/7702fba4845fae8defe80ac739055b83e0123fac.diff
Patch: 
https://github.com/kamailio/kamailio/commit/7702fba4845fae8defe80ac739055b83e0123fac.patch

---

diff --git a/src/modules/htable/ht_dmq.c b/src/modules/htable/ht_dmq.c
index 986c2c769a..1f27e97684 100644
--- a/src/modules/htable/ht_dmq.c
+++ b/src/modules/htable/ht_dmq.c
@@ -139,36 +139,43 @@ static int ht_dmq_cell_group_flush(dmq_node_t* node) {
 
srjson_doc_t *jdoc = &ht_dmq_jdoc_cell_group.jdoc;
srjson_t *jdoc_cells = ht_dmq_jdoc_cell_group.jdoc_cells;
+   int ret = 0;
 
srjson_AddItemToObject(jdoc, jdoc->root, "cells", jdoc_cells);
 
-   LM_DBG("json[%s]\n", srjson_PrintUnformatted(jdoc, jdoc->root));
+   LM_DBG("jdoc size[%d]\n", ht_dmq_jdoc_cell_group.size);
jdoc->buf.s = srjson_PrintUnformatted(jdoc, jdoc->root);
if(jdoc->buf.s==NULL) {
LM_ERR("unable to serialize data\n");
-   return -1;
+   ret = -1;
+   goto cleanup;
}
jdoc->buf.len = strlen(jdoc->buf.s);
 
LM_DBG("sending serialized data %.*s\n", jdoc->buf.len, jdoc->buf.s);
if (ht_dmq_send(&jdoc->buf, node)!=0) {
LM_ERR("unable to send data\n");
-   return -1;
+   ret = -1;
}
 
-   LM_DBG("jdoc size[%d]\n", ht_dmq_jdoc_cell_group.size);
+cleanup:
+
+   srjson_DeleteItemFromObject(jdoc, jdoc->root, "cells");
+   ht_dmq_jdoc_cell_group.count = 0;
+   ht_dmq_jdoc_cell_group.size = dmq_cell_group_empty_size;
+
+   if(jdoc->buf.s!=NULL) {
+   jdoc->free_fn(jdoc->buf.s);
+   jdoc->buf.s = NULL;
+   }
 
-   srjson_Delete(jdoc, jdoc_cells);
ht_dmq_jdoc_cell_group.jdoc_cells = srjson_CreateArray(&ht_dmq_jdoc_cell_group.jdoc);
if (ht_dmq_jdoc_cell_group.jdoc_cells==NULL) {
LM_ERR("cannot re-create json cells array! \n");
-   return -1;
+   ret = -1;
}
 
-   ht_dmq_jdoc_cell_group.count = 0;
-   ht_dmq_jdoc_cell_group.size = dmq_cell_group_empty_size;
-
-   return 0;
+   return ret;
 }
 
 static void ht_dmq_cell_group_destroy() {


___
Kamailio (SER) - Development Mailing List
sr-dev@lists.kamailio.org
https://lists.kamailio.org/cgi-bin/mailman/listinfo/sr-dev


Re: [sr-dev] [kamailio/kamailio] htable: fix infinite loop during dmq sync of large tables (#1872)

2019-02-28 Thread Charles Chance
Merged #1872 into master.

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/pull/1872#event-2172034830
___
Kamailio (SER) - Development Mailing List
sr-dev@lists.kamailio.org
https://lists.kamailio.org/cgi-bin/mailman/listinfo/sr-dev


[sr-dev] git:master:a176ad4f: htable: fix infinite loop during dmq sync of large tables

2019-02-28 Thread Charles Chance
Module: kamailio
Branch: master
Commit: a176ad4fb4167e21b01974e6a5caba330b1d7e14
URL: 
https://github.com/kamailio/kamailio/commit/a176ad4fb4167e21b01974e6a5caba330b1d7e14

Author: Charles Chance 
Committer: Charles Chance 
Date: 2019-02-21T19:27:32Z

htable: fix infinite loop during dmq sync of large tables

- reported by Enrico Bandiera (GH #1863)

---

Modified: src/modules/htable/ht_dmq.c

---

Diff:  
https://github.com/kamailio/kamailio/commit/a176ad4fb4167e21b01974e6a5caba330b1d7e14.diff
Patch: 
https://github.com/kamailio/kamailio/commit/a176ad4fb4167e21b01974e6a5caba330b1d7e14.patch

---

diff --git a/src/modules/htable/ht_dmq.c b/src/modules/htable/ht_dmq.c
index 986c2c769a..1f27e97684 100644
--- a/src/modules/htable/ht_dmq.c
+++ b/src/modules/htable/ht_dmq.c
@@ -139,36 +139,43 @@ static int ht_dmq_cell_group_flush(dmq_node_t* node) {
 
srjson_doc_t *jdoc = &ht_dmq_jdoc_cell_group.jdoc;
srjson_t *jdoc_cells = ht_dmq_jdoc_cell_group.jdoc_cells;
+   int ret = 0;
 
srjson_AddItemToObject(jdoc, jdoc->root, "cells", jdoc_cells);
 
-   LM_DBG("json[%s]\n", srjson_PrintUnformatted(jdoc, jdoc->root));
+   LM_DBG("jdoc size[%d]\n", ht_dmq_jdoc_cell_group.size);
jdoc->buf.s = srjson_PrintUnformatted(jdoc, jdoc->root);
if(jdoc->buf.s==NULL) {
LM_ERR("unable to serialize data\n");
-   return -1;
+   ret = -1;
+   goto cleanup;
}
jdoc->buf.len = strlen(jdoc->buf.s);
 
LM_DBG("sending serialized data %.*s\n", jdoc->buf.len, jdoc->buf.s);
if (ht_dmq_send(&jdoc->buf, node)!=0) {
LM_ERR("unable to send data\n");
-   return -1;
+   ret = -1;
}
 
-   LM_DBG("jdoc size[%d]\n", ht_dmq_jdoc_cell_group.size);
+cleanup:
+
+   srjson_DeleteItemFromObject(jdoc, jdoc->root, "cells");
+   ht_dmq_jdoc_cell_group.count = 0;
+   ht_dmq_jdoc_cell_group.size = dmq_cell_group_empty_size;
+
+   if(jdoc->buf.s!=NULL) {
+   jdoc->free_fn(jdoc->buf.s);
+   jdoc->buf.s = NULL;
+   }
 
-   srjson_Delete(jdoc, jdoc_cells);
ht_dmq_jdoc_cell_group.jdoc_cells = srjson_CreateArray(&ht_dmq_jdoc_cell_group.jdoc);
if (ht_dmq_jdoc_cell_group.jdoc_cells==NULL) {
LM_ERR("cannot re-create json cells array! \n");
-   return -1;
+   ret = -1;
}
 
-   ht_dmq_jdoc_cell_group.count = 0;
-   ht_dmq_jdoc_cell_group.size = dmq_cell_group_empty_size;
-
-   return 0;
+   return ret;
 }
 
 static void ht_dmq_cell_group_destroy() {


___
Kamailio (SER) - Development Mailing List
sr-dev@lists.kamailio.org
https://lists.kamailio.org/cgi-bin/mailman/listinfo/sr-dev


Re: [sr-dev] [kamailio/kamailio] htable: fix infinite loop during dmq sync of large tables (#1872)

2019-02-28 Thread Charles Chance
Thanks @miconda

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/pull/1872#issuecomment-468398828
___
Kamailio (SER) - Development Mailing List
sr-dev@lists.kamailio.org
https://lists.kamailio.org/cgi-bin/mailman/listinfo/sr-dev


[sr-dev] [kamailio/kamailio] Segmentation fault on tm:t_should_relay_response (#1875)

2019-02-28 Thread Fernando S. Santos


### Description

I get segfaults, on average 3 to 6 per day, in the tm module, always in t_should_relay_response.
I'm using kamailio 5.2.0 (x86_64/linux) 535e13 on CentOS Linux release 7.6.1810 (Core) x86_64, running on a XenServer.

### Troubleshooting

No troubleshooting was done, since it happened on a production server. We 
simply restarted the server.

#### Reproduction

The problem is random and has happened a couple of times per day. My kamailio uses tm, dialog and htable, and all calls go through topoh. The server runs with an average of 1000-1200 concurrent calls, though I've seen this segfault with fewer than 400 concurrent calls too.
One very curious thing: after the fifth or sixth segfault, it does not happen again, even when reaching 3000 concurrent calls.

#### Debugging Data



```
(gdb) bt
#0  0x7f685674a84d in t_should_relay_response (Trans=0x7f683d86a3e0, 
new_code=503, branch=0, should_store=0x7fffed6c323c, 
should_relay=0x7fffed6c3240, cancel_data=0x7fffed6c3490, reply=0x7f685b6cafa0) 
at t_reply.c:1279
#1  0x7f685674ee6b in relay_reply (t=0x7f683d86a3e0, p_msg=0x7f685b6cafa0, 
branch=0, msg_status=503, cancel_data=0x7fffed6c3490, do_put_on_wait=1) at 
t_reply.c:1804
#2  0x7f6856754ac3 in reply_received (p_msg=0x7f685b6cafa0) at 
t_reply.c:2563
#3  0x005291a9 in do_forward_reply (msg=0x7f685b6cafa0, mode=0) at 
core/forward.c:747
#4  0x0052ad21 in forward_reply (msg=0x7f685b6cafa0) at 
core/forward.c:852
#5  0x0059e89e in receive_msg (
buf=0xa6c2a0  "SIP/2.0 503 Service Unavailable\r\nVia: 
SIP/2.0/UDP 
X.X.X.38:5060;TH=ucv;branch=z9hG4bKf581.18dae6287b86c3ef95ba52c12f282865.0\r\nVia:
 SIP/2.0/UDP X.X.X.69:5060;received=X.X.X.69;TH=div;bra"..., len=565, 
rcv_info=0x7fffed6c3c30) at core/receive.c:433
#6  0x00481690 in udp_rcv_loop () at core/udp_server.c:541
#7  0x00424a27 in main_loop () at main.c:1621
#8  0x0042c078 in main (argc=9, argv=0x7fffed6c4178) at main.c:2645

(gdb) list
1274		/* except the exception above, too late messages will be discarded */
1275		goto discard;
1276	}
1277
1278	/* if final response received at this branch, allow only INVITE 2xx */
1279	if (Trans->uac[branch].last_received>=200
1280			&& !(inv_through && Trans->uac[branch].last_received<300)) {
1281		/* don't report on retransmissions */
1282		if (Trans->uac[branch].last_received==new_code) {
1283			LM_DBG("final reply retransmission\n");
1284			goto discard;

(gdb) info locals
branch_cnt = 32767
picked_code = 0
new_branch = -311676496
inv_through = 0
extra_flags = 0
i = 32616
replies_dropped = 1143247665
__FUNCTION__ = "t_should_relay_response"

(gdb) bt full
#0  0x7f685674a84d in t_should_relay_response (Trans=0x7f683d86a3e0, 
new_code=503, branch=0, should_store=0x7fffed6c323c, 
should_relay=0x7fffed6c3240, cancel_data=0x7fffed6c3490, reply=0x7f685b6cafa0) 
at t_reply.c:1279
branch_cnt = 32767
picked_code = 0
new_branch = -311676496
inv_through = 0
extra_flags = 0
i = 32616
replies_dropped = 1143247665
__FUNCTION__ = "t_should_relay_response"
#1  0x7f685674ee6b in relay_reply (t=0x7f683d86a3e0, p_msg=0x7f685b6cafa0, 
branch=0, msg_status=503, cancel_data=0x7fffed6c3490, do_put_on_wait=1) at 
t_reply.c:1804
relay = -311676288
save_clone = 0
buf = 0x0
res_len = 0
relayed_code = 0
relayed_msg = 0x0
reply_bak = 0x7fffed6c3290
bm = {to_tag_val = {s = 0x1 , len = 
1032234408}}
totag_retr = 0
reply_status = RPS_ERROR
uas_rb = 0x68924f 
to_tag = 0x7f685e8ffed0 <__syslog>
reason = {s = 0x0, len = 0}
onsend_params = {req = 0x7f685b6cafa0, rpl = 0x7f685b597870, param = 
0x7f68567e0c50, code = 0, flags = 2288, branch = 0, t_rbuf = 0x7f68567e6edb 
<__FUNCTION__.12468>, dst = 0x7f68567e23ab, send_buf = {
s = 0x7f68440006f0 "SIP/2.0 183 Session Progress\r\nVia: 
SIP/2.0/UDP 
X.X.X.35:5060;received=X.X.X.35;TH=div;rport=5060;branch=z9hG4bK-97cc84683b6011e980ec0cc47a0ad35a;sig=7970278d\r\nVia:
 SIP/2.0/UDP X.X.X.35:"..., len = 841211904}}
ip = {af = 3983291088, len = 32767, u = {addrl = {5867152, 11367808}, 
addr32 = {5867152, 0, 11367808, 0}, addr16 = {34448, 89, 0, 0, 30080, 173, 0, 
0}, addr = "\220\206Y\000\000\000\000\000\200u\255\000\000\000\000"}}
__FUNCTION__ = "relay_reply"
#2  0x7f6856754ac3 in reply_received (p_msg=0x7f685b6cafa0) at 
t_reply.c:2563
msg_status = 503
last_uac_status = 408
ack = 0x7f68440006f0 "SIP/2.0 183 Session Progress\r\nVia: SIP/2.0/UDP 
X.X.X.35:5060;received=X.X.X.35;TH=div;rport=5060;branch=z9hG4bK-97cc84683b6011e980ec0cc47a0ad35a;sig=7970278d\r\nVia:
 SIP/2.0/UDP X.X.X.35:"...

Re: [sr-dev] [kamailio/kamailio] htable: fix infinite loop during dmq sync of large tables (#1872)

2019-02-28 Thread Daniel-Constantin Mierla
If it is a fix for an issue affecting the stable versions, then it needs to be backported.

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/pull/1872#issuecomment-468360164
___
Kamailio (SER) - Development Mailing List
sr-dev@lists.kamailio.org
https://lists.kamailio.org/cgi-bin/mailman/listinfo/sr-dev


Re: [sr-dev] [kamailio/kamailio] htable: fix infinite loop during dmq sync of large tables (#1872)

2019-02-28 Thread Charles Chance
Thanks @giacomovaccavonage - ok to backport, too?

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/pull/1872#issuecomment-468353044
___
Kamailio (SER) - Development Mailing List
sr-dev@lists.kamailio.org
https://lists.kamailio.org/cgi-bin/mailman/listinfo/sr-dev


Re: [sr-dev] [kamailio/kamailio] htable: fix infinite loop during dmq sync of large tables (#1872)

2019-02-28 Thread Giacomo Vacca
giacomovaccavonage approved this pull request.

Given the feedback from the tests I saw, this looks good to me.



-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/pull/1872#pullrequestreview-209188680
___
Kamailio (SER) - Development Mailing List
sr-dev@lists.kamailio.org
https://lists.kamailio.org/cgi-bin/mailman/listinfo/sr-dev


Re: [sr-dev] [kamailio/kamailio] Infinite loop inside htable module during dmq synch (#1863)

2019-02-28 Thread Charles Chance
@paolovisintin - as soon as the above PR (#1872) is merged (will also be 
backported).

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/issues/1863#issuecomment-468347911
___
Kamailio (SER) - Development Mailing List
sr-dev@lists.kamailio.org
https://lists.kamailio.org/cgi-bin/mailman/listinfo/sr-dev


Re: [sr-dev] [kamailio/kamailio] Infinite loop inside htable module during dmq synch (#1863)

2019-02-28 Thread paolovisintin
Hello,
sorry for bothering; we would just like to know what timetable to expect for this issue. Unfortunately, we have a production deployment with this latent behaviour, which can be triggered simply by restarting one instance of kamailio.

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/issues/1863#issuecomment-468344345
___
Kamailio (SER) - Development Mailing List
sr-dev@lists.kamailio.org
https://lists.kamailio.org/cgi-bin/mailman/listinfo/sr-dev


[sr-dev] [kamailio/kamailio] Kamailio crash while concurrent dialplan.reload (#1874)

2019-02-28 Thread Dmitri Savolainen
#### Reproduction

Run this script simultaneously (from two consoles, for example):
```
#!/usr/bin/perl
$cmd = q(curl -X GET -H "Content-Type: application/json" -d '{"jsonrpc": "2.0", "method": "dialplan.reload", "params": [], "id": 1}' http://IP:PORT/jsonrpc/);
for ($i = 0; $i < 1000; $i++) {
    `$cmd`;
}
```

#### Debugging Data
```
Program terminated with signal 11, Segmentation fault.
#0  0x7ffaa39e95f7 in add_rule2hash (rule=0x7ffa9fa05248, h_index=1) at 
dp_db.c:563
/home/snen/KamailioGitMyFork/src/modules/dialplan/dp_db.c:563:12474:beg:0x7ffaa39e95f7
(gdb) bt
#0  0x7ffaa39e95f7 in add_rule2hash (rule=0x7ffa9fa05248, h_index=1) at 
dp_db.c:563
#1  0x7ffaa39e4831 in dp_load_db () at dp_db.c:305
#2  0x7ffaa39ce914 in dialplan_rpc_reload (rpc=0x7ffa9f382a80 , 
ctx=0x7ffa9f3829a0 <_jsonrpc_ctx_global>) at dialplan.c:610
#3  0x7ffa9f16398f in jsonrpc_dispatch (msg=0x7ffd427ad880, s1=0x0, s2=0x0) 
at jsonrpcs_mod.c:1294
#4  0x00461a5f in do_action (h=0x7ffd427ad7a0, a=0x7ffaa72987d8, 
msg=0x7ffd427ad880) at core/action.c:1067
#5  0x0046e23e in run_actions (h=0x7ffd427ad7a0, a=0x7ffaa72987d8, 
msg=0x7ffd427ad880) at core/action.c:1564
#6  0x004619ce in do_action (h=0x7ffd427ad7a0, a=0x7ffaa729b140, 
msg=0x7ffd427ad880) at core/action.c:1058
#7  0x0046e23e in run_actions (h=0x7ffd427ad7a0, a=0x7ffaa729b140, 
msg=0x7ffd427ad880) at core/action.c:1564
#8  0x7ffa9f38868c in xhttp_process_request (orig_msg=0x7ffaa72a6ed0, 
new_buf=0x7ffaa72c4420 "GET /jsonrpc/ HTTP/1.1\r\nVia: SIP/2.0/TCP 
192.168.10.190:48542\r\nUser-Agent: curl/7.29.0\r\nHost: 
192.168.10.190:5071\r\nAccept: */*\r\nContent-Type: 
application/json\r\nContent-Length: 71\r\n\r\n{\"jsonrpc\": \"2.0\","..., 
new_len=253) at xhttp_mod.c:296
#9  0x7ffa9f389dcf in xhttp_handler (msg=0x7ffaa72a6ed0) at xhttp_mod.c:383
#10 0x004ff30c in nonsip_msg_run_hooks (msg=0x7ffaa72a6ed0) at 
core/nonsip_hooks.c:112
#11 0x0057b373 in receive_msg (
buf=0x7ffaa02ec618 "GET /jsonrpc/ HTTP/1.1\r\nUser-Agent: 
curl/7.29.0\r\nHost: 192.168.10.190:5071\r\nAccept: */*\r\nContent-Type: 
application/json\r\nContent-Length: 71\r\n\r\n{\"jsonrpc\": \"2.0\", 
\"method\": \"dialplan.reload\", \"params\":"..., len=214, 
rcv_info=0x7ffaa02ec338) at core/receive.c:270
#12 0x00635eb5 in receive_tcp_msg (
tcpbuf=0x7ffaa02ec618 "GET /jsonrpc/ HTTP/1.1\r\nUser-Agent: 
curl/7.29.0\r\nHost: 192.168.10.190:5071\r\nAccept: */*\r\nContent-Type: 
application/json\r\nContent-Length: 71\r\n\r\n{\"jsonrpc\": \"2.0\", 
\"method\": \"dialplan.reload\", \"params\":"..., len=214, 
rcv_info=0x7ffaa02ec338, con=0x7ffaa02ec320) at core/tcp_read.c:1399
#13 0x00638372 in tcp_read_req (con=0x7ffaa02ec320, 
bytes_read=0x7ffd427ae65c, read_flags=0x7ffd427ae658) at core/tcp_read.c:1631
#14 0x0063b115 in handle_io (fm=0x7ffaa7302d78, events=1, idx=-1) at 
core/tcp_read.c:1804
#15 0x006299b6 in io_wait_loop_epoll (h=0xae9060 , t=2, repeat=0) 
at core/io_wait.h:1062
#16 0x0063d172 in tcp_receive_loop (unix_sock=16) at 
core/tcp_read.c:1974
#17 0x004dcaf9 in tcp_init_children () at core/tcp_main.c:5086
#18 0x004266c4 in main_loop () at main.c:1750
#19 0x0042d14b in main (argc=13, argv=0x7ffd427aedc8) at main.c:2737
(gdb) l
558 new_id = 0;
559 
560 /*search for the corresponding dpl_id*/
561 for(crt_idp = last_idp =rules_hash[h_index]; crt_idp!= NULL; 
562 last_idp = crt_idp, crt_idp = crt_idp->next)
563 if(crt_idp->dp_id == rule->dpid)
564 break;
565 
566 /*didn't find a dpl_id*/
567 if(!crt_idp){
(gdb) p crt_idp->dp_id
Cannot access memory at address 0x7ffa9f002429
(gdb) 

```
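
The backtrace is consistent with a reload race: two RPC processes rebuilding the shared `rules_hash` at the same time, so one walks a list whose nodes the other has already freed (hence the `Cannot access memory` above). As a general illustration only (not the dialplan module's actual fix), concurrent reloads can be serialized with Kamailio's core locking API; `reload_lock` here is a hypothetical shared-memory lock created with `lock_alloc()`/`lock_init()` at startup:

```c
#include "../../core/locking.h"
#include "../../core/rpc.h"

/* Sketch: serialize reloads so only one process rebuilds the shared
 * hash at a time; reload_lock is assumed to live in shared memory. */
static gen_lock_t *reload_lock;

static void dialplan_rpc_reload(rpc_t *rpc, void *ctx)
{
	lock_get(reload_lock);
	if (dp_load_db() != 0) {
		lock_release(reload_lock);
		rpc->fault(ctx, 500, "Dialplan reload failed");
		return;
	}
	lock_release(reload_lock);
}
```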


### Additional Information

```
kamailio 5.3.0-dev3 (x86_64/linux) d726bd
```



-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/issues/1874
___
Kamailio (SER) - Development Mailing List
sr-dev@lists.kamailio.org
https://lists.kamailio.org/cgi-bin/mailman/listinfo/sr-dev


Re: [sr-dev] [kamailio/kamailio] Segfault when processing select variable via kemi (#1829)

2019-02-28 Thread Thomas Weber
@miconda great! Glad I could help.
I will try it out in a few days. See you at Kamailio World.

Cheers

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/issues/1829#issuecomment-468222085
___
Kamailio (SER) - Development Mailing List
sr-dev@lists.kamailio.org
https://lists.kamailio.org/cgi-bin/mailman/listinfo/sr-dev


Re: [sr-dev] [kamailio/kamailio] Memory allocator was called from cdp: diameter_avp.c:365 (#1851)

2019-02-28 Thread Denys
Hello!
The issue is fixed, thanks!

Now it shows an error without crashing.
```
Feb 28 11:15:08 kamailio /usr/sbin/kamailio[10918]: ERROR: cdp [peerstatemachine.c:634]: I_Snd_CER(): I_Snd_CER(): Error on finding local host address > Socket operation on non-socket
Feb 28 11:15:08 kamailio /usr/sbin/kamailio[10918]: ERROR: cdp [peerstatemachine.c:674]: add_peer_application(): Too many applications for this peer (max 5), not adding Application 4:10415.
```


-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/issues/1851#issuecomment-468219059
___
Kamailio (SER) - Development Mailing List
sr-dev@lists.kamailio.org
https://lists.kamailio.org/cgi-bin/mailman/listinfo/sr-dev


Re: [sr-dev] [kamailio/kamailio] kamailio crashes when attempting to query offline database (#1821)

2019-02-28 Thread Daniel-Constantin Mierla
Can you get from gdb the output for the following commands:

```
frame 1
p sc
p *sc
```

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/issues/1821#issuecomment-468182004
___
Kamailio (SER) - Development Mailing List
sr-dev@lists.kamailio.org
https://lists.kamailio.org/cgi-bin/mailman/listinfo/sr-dev