Re: [OpenSIPS-Users] 3.2.3 TLS issue

2021-11-12 Thread Andrew Yager
Do you have a core dump with the backtrace when it dies?

There are a few TLS-related fixes in the nightly releases, and we've
found those builds to be a better choice for SSL performance.
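For anyone needing to produce that backtrace, here is a minimal sketch of pulling one out of a core file with gdb in batch mode. The binary path and core file name are illustrative only; cores must first be enabled (e.g. `ulimit -c unlimited` for the opensips service).

```shell
# Hypothetical helper: print a full backtrace from an opensips core dump.
# The paths and core file name below are examples, not real artifacts.
collect_bt() {
    local binary=$1 corefile=$2
    gdb "$binary" "$corefile" -batch -ex 'bt full' -ex 'thread apply all bt'
}
# usage: collect_bt /usr/sbin/opensips /tmp/core.opensips.15328
```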

Thanks,
Andrew

On Fri, 12 Nov 2021 at 02:30, Gregory Massel  wrote:

> Thanks.
>
> This report lists it as a bug in WolfSSL; however, given that I got
> similar errors (although not a segfault) when changing to OpenSSL, it
> would appear more likely that the bug is in proto_tls or tls_mgm.
>
> --Greg
> On 2021-11-11 17:06, Mark Farmer wrote:
>
> Confirmed here too, 3.2.2 is running fine.
>
> There does seem to be a bug report about this:
>
> https://github.com/OpenSIPS/opensips/issues/2667
>
>
> On Thu, 11 Nov 2021 at 14:33, Gregory Massel  wrote:
>
>> I've managed to downgrade to 3.2.2 and all is working again. So this
>> appears to be a bug introduced in 3.2.3.
>>
>> Environment is:
>>
>> Linux msteams 5.4.0-90-generic #101~18.04.1-Ubuntu SMP Fri Oct 22
>> 09:25:04 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
>>
>> The downgrade/fix was with:
>>
>> apt-get install opensips=3.2.2-1 opensips-dialplan-module=3.2.2-1
>> opensips-http-modules=3.2.2-1 opensips-mysql-module=3.2.2-1
>> opensips-presence-modules=3.2.2-1 opensips-regex-module=3.2.2-1
>> opensips-tls-module=3.2.2-1 opensips-tls-openssl-module=3.2.2-1
>> opensips-tls-wolfssl-module=3.2.2-1 opensips-tlsmgm-module=3.2.2-1
>>
>> --Greg
>> On 2021-11-11 15:56, Gregory Massel wrote:
>>
>> I'm running OpenSIPS 3.2.3 and it keeps bombing out on SSL connections.
>>
>> With WolfSSL it segfaults:
>>
>> Nov 11 11:52:04 msteams /usr/sbin/opensips[15322]: 
>> NOTICE:tls_wolfssl:verify_callback: depth = 1, verify success
>> Nov 11 11:52:04 msteams /usr/sbin/opensips[15322]: 
>> NOTICE:tls_wolfssl:verify_callback: depth = 0, verify success
>> Nov 11 11:52:04 msteams /usr/sbin/opensips[15322]: 
>> INFO:tls_wolfssl:_wolfssl_tls_async_connect: new TLS connection to 
>> 52.114.75.24:5061 established
>> Nov 11 11:52:04 msteams /usr/sbin/opensips[15322]: 
>> NOTICE:tls_wolfssl:verify_callback: depth = 1, verify success
>> Nov 11 11:52:04 msteams /usr/sbin/opensips[15322]: 
>> NOTICE:tls_wolfssl:verify_callback: depth = 0, verify success
>> Nov 11 11:52:05 msteams /usr/sbin/opensips[15322]: 
>> INFO:tls_wolfssl:_wolfssl_tls_async_connect: new TLS connection to 
>> 52.114.132.46:5061 established
>> Nov 11 11:52:05 msteams /usr/sbin/opensips[15328]: CRITICAL:core:sig_usr: 
>> segfault in process pid: 15328, id: 25
>> Nov 11 11:52:05 msteams kernel: [22403546.537543] opensips[15328]: segfault 
>> at 35 ip 7ff7b4b3f790 sp 7fff48dd0a30 error 4 in 
>> tls_wolfssl.so[7ff7b4a71000+1e3000]
>> Nov 11 11:52:05 msteams kernel: [22403546.537549] Code: ff ff e9 5b ff ff ff 
>> 0f 1f 00 53 48 8d 3d 08 40 3b 00 e8 53 09 00 00 85 c0 75 4f 48 8b 3d 50 40 
>> 3b 00 48 85 ff 74 14 0f 1f 00 <48> 8b 5f 08 e8 87 36 f7 ff 48 85 db 48 89 df 
>> 75
>> ef 5b 48 8d 3d d7
>> Nov 11 11:52:05 msteams /usr/sbin/opensips[15322]: 
>> NOTICE:tls_wolfssl:verify_callback: depth = 1, verify success
>> Nov 11 11:52:05 msteams /usr/sbin/opensips[15322]: 
>> NOTICE:tls_wolfssl:verify_callback: depth = 0, verify success
>> Nov 11 11:52:05 msteams /usr/sbin/opensips[15322]: 
>> INFO:tls_wolfssl:_wolfssl_tls_accept: New TLS connection from 
>> 52.114.75.24:17152 accepted
>> Nov 11 11:52:05 msteams /usr/sbin/opensips[15322]: 
>> INFO:tls_wolfssl:tls_dump_cert_info: tls_accept: client TLS certificate 
>> subject: /CN=sip.pstnhub.microsoft.com, issuer: /C=US/O=Microsoft 
>> Corporation/CN=Microsoft RSA TLS CA 01
>> Nov 11 11:52:05 msteams /usr/sbin/opensips[15325]: 
>> NOTICE:tls_wolfssl:verify_callback: depth = 1, verify success
>> Nov 11 11:52:05 msteams /usr/sbin/opensips[15325]: 
>> NOTICE:tls_wolfssl:verify_callback: depth = 0, verify success
>> Nov 11 11:52:05 msteams /usr/sbin/opensips[15329]: CRITICAL:core:sig_usr: 
>> segfault in process pid: 15329, id: 26
>> Nov 11 11:52:05 msteams kernel: [22403546.732270] traps: opensips[15329] 
>> general protection fault ip:7ff7b4b0953d sp:7fff48dd0760 error:0 in 
>> tls_wolfssl.so[7ff7b4a71000+1e3000]
>> Nov 11 11:52:05 msteams /usr/sbin/opensips[15326]: 
>> NOTICE:tls_wolfssl:verify_callback: depth = 1, verify success
>> Nov 11 11:52:05 msteams /usr/sbin/opensips[15326]: 
>> NOTICE:tls_wolfssl:verify_callback: depth = 0, verify success
>> Nov 11 11:52:05 msteams /usr/sbin/opensips[15325]: 
>> INFO:tls_wolfssl:_wolfssl_tls_async_connect: new TLS connection to 
>> 54.171.127.194:5061 established
>> Nov 11 11:52:05 msteams /usr/sbin/opensips[15330]: CRITICAL:core:sig_usr: 
>> segfault in process pid: 15330, id: 27
>> Nov 11 11:52:05 msteams kernel: [22403546.801626] traps: opensips[15330] 
>> general protection fault ip:7ff7b4b3f790 sp:7fff48dd0a30 error:0 in 
>> tls_wolfssl.so[7ff7b4a71000+1e3000]
>> Nov 11 11:52:05 msteams /usr/sbin/opensips[15326]: 
>> NOTICE:tls_wolfssl:verify_callback: depth = 1, verify success
>> Nov 11 11:52:05 msteams /usr/sbin/opensips[15326]: 
>> 

Re: [OpenSIPS-Users] mi_fifo lock on reply after a period of time in 3.1.3

2021-10-18 Thread Andrew Yager
For those following along, this appears to have been an issue with Auto
Scaling. Once disabled, the problem has not recurred. In our case this is
a perfectly acceptable way to move forward, but maybe at some stage we'll
come back to working this out.

Andrew


On Sat, 9 Oct 2021 at 01:39, Andrew Yager  wrote:

> Process 24103 (the blocked proc) last logged at 19:02:01, and in doing
> so, it had:
>
> Oct  8 19:02:01 hvprxy osips[24103]: DBG:tm:t_lookup_request: start
> searching: hash=34811, isACK=0
> Oct  8 19:02:01 hvprxy osips[24103]: DBG:tm:matching_3261: RFC3261
> transaction matching failed
> Oct  8 19:02:01 hvprxy osips[24103]: DBG:tm:t_lookup_request: no
> transaction found
> Oct  8 19:02:01 hvprxy osips[24103]: DBG:statistics:w_update_stat:
> needed statistic is 
> Oct  8 19:02:01 hvprxy osips[24103]: DBG:statistics:parse_groupname:
> group: '', name: 'cpbx_presence_notify_rcv'
> Oct  8 19:02:01 hvprxy osips[24103]: DBG:core:select_contacts: ct:
> sip:[snip]@[snip]:55087;transport=tls
> Oct  8 19:02:01 hvprxy osips[24103]: DBG:core:push_branch: setting msg
> R-URI 
> Oct  8 19:02:01 hvprxy osips[24103]: [RELAY] Performing mid-registrar
> lookup on sip:[snip]:5060
> Oct  8 19:02:01 hvprxy osips[24103]: DBG:core:lookup: '' Not found in
> usrloc
> Oct  8 19:02:01 hvprxy osips[24103]: DBG:statistics:w_update_stat:
> needed statistic is 
> Oct  8 19:02:01 hvprxy osips[24103]: DBG:statistics:parse_groupname:
> group: '', name: 'cpbx_contact_lookup_failed'
> Oct  8 19:02:01 hvprxy osips[24103]: DBG:core:select_contacts: ct:
> sip:[snip]@[snip]:55087;transport=tls
> Oct  8 19:02:01 hvprxy osips[24103]: DBG:core:push_branch: setting msg
> R-URI 
> Oct  8 19:02:01 hvprxy osips[24103]: [RELAY] Performed
> mid_registrar_lookup: sip:xxx@xxx:55087;transport=tls
> Oct  8 19:02:01 hvprxy osips[24103]: DBG:tm:t_newtran: transaction on
> entrance=(nil)
> Oct  8 19:02:01 hvprxy osips[24103]: DBG:core:parse_headers:
> flags=
> Oct  8 19:02:01 hvprxy osips[24103]: DBG:core:parse_headers: flags=78
> Oct  8 19:02:01 hvprxy osips[24103]: DBG:tm:t_lookup_request: start
> searching: hash=34811, isACK=0
> Oct  8 19:02:01 hvprxy osips[24103]: DBG:tm:matching_3261: RFC3261
> transaction matching failed
> Oct  8 19:02:01 hvprxy osips[24103]: DBG:tm:t_lookup_request: no
> transaction found
> Oct  8 19:02:01 hvprxy osips[24103]: DBG:tm:run_reqin_callbacks:
> trans=0x7fafa71d8150, callback type 1, id 0 entered
> Oct  8 19:02:01 hvprxy osips[24103]: DBG:core:mk_proxy: doing DNS lookup...
> Oct  8 19:02:01 hvprxy osips[24103]: DBG:core:parse_headers: flags=2000
> Oct  8 19:02:01 hvprxy osips[24103]: DBG:core:parse_headers:
> flags=
> Oct  8 19:02:01 hvprxy osips[24103]: DBG:core:tcp_conn_get: con found in
> state 0
> Oct  8 19:02:01 hvprxy osips[24103]: DBG:core:tcp_conn_get: tcp
> connection found (0x7fafa84997e8), acquiring fd
> Oct  8 19:02:01 hvprxy osips[24103]: DBG:core:tcp_conn_get: c=
> 0x7fafa84997e8, n=16, Usock=647
> < no more >
>
> Proc 11468 (tcp main thread) logged:
>
> Oct  8 19:02:01 hvprxy osips[11468]: DBG:core:probe_max_sock_buff:
> getsockopt: snd is initially 425984
> Oct  8 19:02:01 hvprxy osips[11468]: DBG:core:probe_max_sock_buff:
> setting snd buf to 851968 had no effect
> Oct  8 19:02:01 hvprxy osips[11468]: DBG:core:probe_max_sock_buff:
> setting snd buf to 428032 had no effect
> Oct  8 19:02:01 hvprxy osips[11468]: DBG:core:probe_max_sock_buff:
> using snd buffer of 416 kb
> Oct  8 19:02:01 hvprxy osips[11468]: DBG:core:init_sock_keepalive: TCP
> keepalive enabled on socket 636
> Oct  8 19:02:01 hvprxy osips[11468]: DBG:core:print_ip: tcpconn_new:
> new tcp connection to: xxx
> Oct  8 19:02:01 hvprxy osips[11468]: DBG:core:tcpconn_new: on port
> 11934, proto 3
> Oct  8 19:02:01 hvprxy osips[11468]: DBG:core:tcpconn_add: hashes: 1004,
> 320
> Oct  8 19:02:01 hvprxy osips[11468]: DBG:core:handle_new_connect: new
> connection: 0x7fafa7cc0c80 636 flags: 001c
> Oct  8 19:02:01 hvprxy osips[11468]: DBG:core:send2worker: no free tcp
> receiver, connection passed to the least busy one (proc #1, 3 con)
> Oct  8 19:02:01 hvprxy osips[11468]: DBG:core:send2worker: to tcp
> worker 1 (0), 0x7fafa7cc0c80 rw 1
> Oct  8 19:02:01 hvprxy osips[11468]: DBG:core:ipc_handle_job: received
> job type 0[RPC] from process 1
> Oct  8 19:02:01 hvprxy osips[11468]: DBG:core:probe_max_sock_buff:
> getsockopt: snd is initially 425984
> Oct  8 19:02:01 hvprxy osips[11468]: DBG:core:probe_max_sock_buff:
> setting snd buf to 851968 had no effect
> Oct  8 19:02:01 hvprxy osips[11468]: DBG:core:probe_max_sock_buff:
> setting snd buf to 428032 had no effect
> Oct  8 19:02:01 hvprxy osips[11468]: DBG:core:probe_max_sock_buff:

Re: [OpenSIPS-Users] mi_fifo lock on reply after a period of time in 3.1.3

2021-10-08 Thread Andrew Yager
esting things…

Oct  8 19:01:55 hvprxy osips[11397]: DBG:core:tcp_conn_get: tcp
connection found (0x7fafa84997e8), acquiring fd
Oct  8 19:01:55 hvprxy osips[11397]: DBG:core:tcp_conn_get: c=
0x7fafa84997e8, n=16, Usock=555
Oct  8 19:01:55 hvprxy osips[11397]: DBG:core:tcp_conn_get: after
receive_fd: c= 0x7fafa84997e8 n=8 fd=5
Oct  8 19:01:55 hvprxy osips[11397]: DBG:proto_tls:proto_tls_send:
after write: c=0x7fafa84997e8 n=483 fd=5
Oct  8 19:02:01 hvprxy osips[24103]: DBG:core:tcp_conn_get: tcp
connection found (0x7fafa84997e8), acquiring fd
Oct  8 19:02:01 hvprxy osips[24103]: DBG:core:tcp_conn_get: c=
0x7fafa84997e8, n=16, Usock=647
Oct  8 19:02:13 hvprxy osips[11398]: DBG:core:tcp_conn_get: tcp
connection found (0x7fafa84997e8), acquiring fd
Oct  8 19:02:13 hvprxy osips[11398]: DBG:core:tcp_conn_get: c=
0x7fafa84997e8, n=16, Usock=557
Oct  8 19:02:13 hvprxy osips[11398]: DBG:core:tcp_conn_get: after
receive_fd: c= 0x7fafa84997e8 n=8 fd=5
Oct  8 19:02:13 hvprxy osips[11398]: DBG:proto_tls:proto_tls_send:
after write: c=0x7fafa84997e8 n=360 fd=5
Oct  8 19:02:13 hvprxy osips[11460]: DBG:proto_tls:tcp_handle_req:
Nothing more to read on TCP conn 0x7fafa84997e8, currently in state 0
Oct  8 19:02:43 hvprxy osips[11398]: DBG:core:tcp_conn_get: tcp
connection found (0x7fafa84997e8), acquiring fd
Oct  8 19:02:43 hvprxy osips[11398]: DBG:core:tcp_conn_get: c=
0x7fafa84997e8, n=16, Usock=557
Oct  8 19:02:43 hvprxy osips[11398]: DBG:core:tcp_conn_get: after
receive_fd: c= 0x7fafa84997e8 n=8 fd=5
Oct  8 19:02:43 hvprxy osips[11398]: DBG:proto_tls:proto_tls_send:
after write: c=0x7fafa84997e8 n=360 fd=5
Oct  8 19:02:43 hvprxy osips[11460]: DBG:proto_tls:tcp_handle_req:
Nothing more to read on TCP conn 0x7fafa84997e8, currently in state 0
Oct  8 19:02:44 hvprxy osips[11460]: DBG:proto_tls:tcp_handle_req:
Nothing more to read on TCP conn 0x7fafa84997e8, currently in state 0
Oct  8 19:02:44 hvprxy osips[24113]: DBG:core:tcp_conn_get: tcp
connection found (0x7fafa84997e8), acquiring fd
Oct  8 19:02:44 hvprxy osips[24113]: DBG:core:tcp_conn_get: c=
0x7fafa84997e8, n=16, Usock=649
Oct  8 19:03:13 hvprxy osips[11458]: DBG:core:tcp_conn_get: tcp
connection found (0x7fafa84997e8), acquiring fd
Oct  8 19:03:13 hvprxy osips[11458]: DBG:core:tcp_conn_get: c=
0x7fafa84997e8, n=16, Usock=625
Oct  8 19:03:13 hvprxy osips[11458]: DBG:core:tcp_conn_get: after
receive_fd: c= 0x7fafa84997e8 n=8 fd=950
Oct  8 19:03:13 hvprxy osips[11458]: DBG:proto_tls:proto_tls_send:
after write: c=0x7fafa84997e8 n=360 fd=950
Oct  8 19:03:13 hvprxy osips[11460]: DBG:proto_tls:tcp_handle_req:
Nothing more to read on TCP conn 0x7fafa84997e8, currently in state 0

All this said, I can't say I'm sure what I should be looking for.
Nothing stands out in the logs to me, but there are a few gigabytes of
logs, so it's hard to know.
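One way to cut a few gigabytes of logs down to something readable is to grep for the suspect connection pointer (0x7fafa84997e8 in the excerpts above) and list which processes touched it. This sketch uses inline sample lines as stand-ins for the real syslog:

```shell
# Sample lines standing in for /var/log/syslog; the pointer value is the
# connection that the blocked processes above were all fighting over.
log='Oct  8 19:02:01 hvprxy osips[24103]: DBG:core:tcp_conn_get: c= 0x7fafa84997e8, n=16, Usock=647
Oct  8 19:02:13 hvprxy osips[11398]: DBG:core:tcp_conn_get: c= 0x7fafa84997e8, n=16, Usock=557'

# which PIDs touched this connection?
printf '%s\n' "$log" | grep -F '0x7fafa84997e8' | grep -o 'osips\[[0-9]*\]' | sort -u
```

Against the real syslog, replace the `printf` with `grep -F '0x7fafa84997e8' /var/log/syslog`.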

Andrew


On Sat, 9 Oct 2021 at 00:19, Bogdan-Andrei Iancu  wrote:
>
> Hi Andrew,
>
> The second blocked process (doing the TLS/TCP stuff) surprisingly got
> stuck while waiting for a TCP fd from the TCP Main process.
>
> You mentioned that the logs of the UDP worker (doing the TCP send)
> suddenly stopped - around that time, do you see any errors from that
> process or from the TCP MAIN processes ?
>
> Regards,
>
> Bogdan-Andrei Iancu
>
> OpenSIPS Founder and Developer
>https://www.opensips-solutions.com
> OpenSIPS eBootcamp 2021
>https://opensips.org/training/OpenSIPS_eBootcamp_2021/
>
> On 10/8/21 2:43 PM, Andrew Yager wrote:
> > Hi Bogdan-Andrei,
> >
> > Have restarted since the last bt, but have recreated again and
> > attached. Earlier today we did also get another bt full on the second
> > blocked pid, but I didn't save it. In that case it was a UDP reply
> > from one of our upstream servers that had gone through mid_registrar
> > and was being relayed to a TCP endpoint. The TCP endpoint did have an
> > open file descriptor we could see, and it had sent and was blocked on
> > receive at the same point (I'm getting better at reading backtraces!
> > :D).
> >
> > The thing I do note is happening is that every example I have is a UDP
> > message being received from an upstream server being relayed to a
> > client on a TCP/TLS connection via a UDP worker.
> >
> > While we are using WolfSSL in this box, the other box where we have
> > the same behaviour (but I haven't taken backtraces yet) is running
> > OpenSSL and on 3.1.3; so it's not SSL library specific.
> >
> > I'm going to see if I can get a backtrace from the 3.1.3 box shortly.
> >
> > Andrew
> >
> > On Fri, 8 Oct 2021 at 17:13, Bogdan-Andrei Iancu  
> > wrote:
> >> Hi Andrew,
> >>
> >> OK, interesting progress here. So, the FIFO process blocks as it is
> >> trying to send an IPC 

Re: [OpenSIPS-Users] mi_fifo lock on reply after a period of time in 3.1.3

2021-10-08 Thread Andrew Yager
Hi Bogdan-Andrei,

Have restarted since the last bt, but have recreated again and
attached. Earlier today we did also get another bt full on the second
blocked pid, but I didn't save it. In that case it was a UDP reply
from one of our upstream servers that had gone through mid_registrar
and was being relayed to a TCP endpoint. The TCP endpoint did have an
open file descriptor we could see, and it had sent and was blocked on
receive at the same point (I'm getting better at reading backtraces!
:D).

The thing I do note is happening is that every example I have is a UDP
message being received from an upstream server being relayed to a
client on a TCP/TLS connection via a UDP worker.

While we are using WolfSSL in this box, the other box where we have
the same behaviour (but I haven't taken backtraces yet) is running
OpenSSL and on 3.1.3; so it's not SSL library specific.

I'm going to see if I can get a backtrace from the 3.1.3 box shortly.

Andrew

On Fri, 8 Oct 2021 at 17:13, Bogdan-Andrei Iancu  wrote:
>
> Hi Andrew,
>
> OK, interesting progress here. So, the FIFO process blocks as it is
> trying to send an IPC job to a UDP process which appears to be blocked
> as well.
>
> Could you attach GDB to that blocked UDP process too? (You have its
> PID in the pt[x] printout from the first gdb session.)
>
> Regards,
>
> Bogdan-Andrei Iancu
>
> OpenSIPS Founder and Developer
>https://www.opensips-solutions.com
> OpenSIPS eBootcamp 2021
>https://opensips.org/training/OpenSIPS_eBootcamp_2021/
>
> On 10/8/21 1:43 AM, Andrew Yager wrote:
> > Interestingly, where I usually see a range of continued messages from
> > a process continually in the debug log, they appear to stop for this
> > PID at 3:47am, and that process seems blocked on a tcp/tls send:
> >
> >
>
(gdb) p pt[50]
$1 = {pid = 24103, type = TYPE_UDP, pg_filter = 0x7fb025689930, desc = "SIP 
receiver udp:114.141.98.21:5060", '\000' , flags = 100, 
ipc_pipe = {210, 211}, ipc_pipe_holder = {210, 211}, ipc_sync_pipe = {212, 
213}, ipc_sync_pipe_holder = {212, 213}, tcp_socks_holder = {646, 647}, 
unix_sock = 646, 
  log_level = 4, default_log_level = 4, load_rt = 0x7fafa819f590, load_1m = 
0x7fafa819f650, load_10m = 0x7fafa819f710, pkg_total = 0x7fafa81e4ba0, pkg_used 
= 0x7fafa81e4c50, pkg_rused = 0x7fafa81e3d68, pkg_mused = 0x7fafa81e3e20, 
pkg_free = 0x7fafa81e2f30, pkg_frags = 0x7fafa81e2fe0, load = {ST_window = 
{1000, 1000, 1000, 
  1000, 1000, 1000, 1000, 1000, 1000, 997, 1000 , 997, 
1000 , 997, 1000 , 997, 1000, 1000, 1000, 
1000, 924, 0 , 390, 1000 , 996, 1000 
, 997, 1000 , 997, 1000 , 995, 
  1000 , 996, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 
1000, 1000, 1000, 997, 1000 , 997, 1000 , 
998, 1000 , 997, 1000 , 997, 1000 , 998, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 
998, 
  1000 ...}, LT_window = {0 , 557337, 
109912, 0 }, last_time = 1633680121109927, is_busy = 1 
'\001'}}
(gdb) q

root@hvprxy:/var/log# gdb opensips 24103
GNU gdb (Ubuntu 9.2-0ubuntu1~20.04) 9.2
Copyright (C) 2020 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.

For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from opensips...

[snip]

(gdb) bt full
#0  0x7fb035888057 in __libc_recvmsg (fd=fd@entry=647, 
msg=msg@entry=0x7ffc2e266910, flags=flags@entry=256) at 
../sysdeps/unix/sysv/linux/recvmsg.c:28
resultvar = 18446744073709551104
__arg3 = 
_a2 = 
sc_ret = 
__arg1 = 
_a3 = 
sc_cancel_oldtype = 
resultvar = 
resultvar = 
__arg2 = 
_a1 = 
#1  0x55e994af9fac in receive_fd (unix_socket=unix_socket@entry=647, 
data=data@entry=0x7ffc2e266a10, data_len=data_len@entry=8, 
fd=fd@entry=0x7ffc2e266a0c, flags=flags@entry=256) at net/tcp_passfd.c:213
msg = {msg_name = 0x0, msg_namelen = 0, msg_iov = 0x7ffc2e266950, 
msg_iovlen = 1, msg_control = 0x7ffc2e266960, msg_controllen = 24, msg_flags = 
0}
iov = {{iov_base = 0x7ffc2e266a10, iov_len = 8}}
new_fd = 
ret = 
n = 
cmsg = 
control_un = {cm = {cmsg_len = 0, cmsg_level = 222, cmsg_type = 0, 
__cmsg_data = 0x7ffc2e266970 "\220J\214%\260\177"}, control = 
"\000\000
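The recvmsg in frame #0 above is blocked on unix socket fd 647 while waiting for a passed TCP fd. A quick way to confirm what a blocked descriptor actually points at is to resolve it through /proc; the PID and fd numbers in the usage line are just the ones from this trace:

```shell
# Hypothetical check: resolve a process's file descriptor to what it points
# at; an fd-passing unix socket shows up as "socket:[inode]".
inspect_fd() {
    ls -l "/proc/$1/fd/$2"
}
# usage: inspect_fd 24103 647
```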

Re: [OpenSIPS-Users] mi_fifo lock on reply after a period of time in 3.1.3

2021-10-07 Thread Andrew Yager
 = 0x0
ip = {af = 2, len = 4, u = {addrl = {9385250162, 0}, addr32 =
{795315570, 2, 0, 0}, addr16 = {36210, 12135, 2, 0, 0, 0, 0, 0},
addr = "r\215g/\002\000\000\000\000\000\000\000\000\000\000"}}
port = 
fd = 21943
n = 
rlen = 
__FUNCTION__ = "proto_tls_send"
#4  0x7f6ec5ccd3af in msg_send (msg=0x0, len=,
buf=0x7f6fc6d85428 "SIP/2.0 200 OK\r\nv: SIP/2.0/TLS
192.168.86.60:50375;rport=50375;[snip]"..., id=,
to=0x7f6ed4bd8350, proto=3, send_sock=0x7f6fc6b4a420) at
../../forward.h:123
out_buff = {
  s = 0x7f6fc6d85428 "SIP/2.0 200 OK\r\nv: SIP/2.0/TLS
192.168.86.60:50375;rport=50375;received=[snip]"..., len = 483}
port = 
ip = 
out_buff = 
port = 
ip = 
__FUNCTION__ = "msg_send"
__ip = 
#5  send_pr_buffer (rb=rb@entry=0x7f6ed4bd8330,
buf=buf@entry=0x7f6fc6d85428, len=, ctx=ctx@entry=0x0)
at t_funcs.c:68
--Type  for more, q to quit, c to continue without paging--
__FUNCTION__ = "send_pr_buffer"
#6  0x7f6ec5cc3eef in relay_reply (t=0x7f6ed4bd8230,
p_msg=, branch=, msg_status=, cancel_bitmap=0x7ffed7680144)
at t_reply.c:1364
relay = 0
save_clone = 0
buf = 0x7f6fc6d85428 "SIP/2.0 200 OK\r\nv: SIP/2.0/TLS
192.168.86.60:50375;rport=50375;received=[snip]"...
res_len = 483
relayed_code = 200
relayed_msg = 0x7f6fc6d83c00
bm = {to_tag_val = {s = 0x55b7abfd9808  "", len =
-976256698}}
totag_retr = 0
reply_status = RPS_COMPLETED
    uas_rb = 0x7f6ed4bd8330
cb_s = {s = 0x7f6fc6d83c00 "\f", len = -681049792}
text = {s = 0x55b7abec1e50  "ȬB\311n\177", len = 0}
__FUNCTION__ = "relay_reply"
[snip]

--
Andrew Yager, CEO (BCompSc, JNCIS-SP, MACS (Snr) CP)
business nbn™ advisor (advisor 01783150)
Real World Technology Solutions - IT People you can trust
Voice | Data | IT Procurement | Managed IT
rwts.com.au | 1300 798 718


Real World is a DellEMC Gold Partner

This document should be read only by those persons to whom it is
addressed and its content is not intended for use by any other
persons. If you have received this message in error, please notify us
immediately. Please also destroy and delete the message from your
computer. Any unauthorised form of reproduction of this message is
strictly prohibited. We are not liable for the proper and complete
transmission of the information contained in this communication, nor
for any delay in its receipt. Please consider the environment before
printing this e-mail.


On Fri, 8 Oct 2021 at 09:34, Andrew Yager  wrote:
>
> Hi,
>
> The procs had restarted, but I repeated the exercise and it's PID 48 this time.
>
> (gdb) bt full
> #0  0x7f6fe6d48297 in __libc_write (fd=fd@entry=203,
> buf=buf@entry=0x7ffed7680050, nbytes=nbytes@entry=24) at
> ../sysdeps/unix/sysv/linux/write.c:26
> resultvar = 18446744073709551104
> __arg3 = 
> _a2 = 
> sc_ret = 
> __arg1 = 
> _a3 = 
> sc_cancel_oldtype = 
> resultvar = 
> resultvar = 
> __arg2 = 
> _a1 = 
> #1  0x55b7abca8d33 in __ipc_send_job (payload2=0x0,
> payload1=0x55b7abc6e760 , type=0, fd=203) at
> ipc.c:171
> job = {snd_proc = 1, handler_type = 0, payload1 =
> 0x55b7abc6e760 , payload2 = 0x0}
> n = 
> job = 
> n = 
> __FUNCTION__ = "__ipc_send_job"
> #2  ipc_send_rpc (dst_proc=dst_proc@entry=48,
> rpc=rpc@entry=0x55b7abc6e760 ,
> param=param@entry=0x0) at ipc.c:194
> [snip]
> ---
>
> (gdb) p pt[48]
> $1 = {pid = 2639896, type = TYPE_UDP, pg_filter = 0x7f6fc6b4a930, desc
> = "SIP receiver udp:1.1.1.1:5060", '\000' , flags =
> 228, ipc_pipe = {202,
> 203}, ipc_pipe_holder = {202, 203}, ipc_sync_pipe = {204, 205},
> ipc_sync_pipe_holder = {204, 205}, tcp_socks_holder = {642, 643},
> unix_sock = 642, log_level = 4,
>   default_log_level = 4, load_rt = 0x7f6ec9493f10, load_1m =
> 0x7f6ec9493fd0, load_10m = 0x7f6ec9494090, pkg_total = 0x7f6ec94d2048,
> pkg_used = 0x7f6ec94d20f8,
>   pkg_rused = 0x7f6ec94d3b80, pkg_mused = 0x7f6ec94d3c38, pkg_free =
> 0x7f6ec94d2e00, pkg_frags = 0x7f6ec94d8050, load = {ST_window = {0
> , 509,
>   1000 , 996, 1000 , 996,
> 1000 , 997, 1000, 1000, 1000, 1000, 1000, 1000,
> 1000, 1000, 997,
>   1000 , 996, 1000 , 997, 1000
> , 997, 1000, 1000, 1000, 1000, 1000, 997, 1000
> , 967, 0, 0, 0,
>   0, 0, 255, 151, 0, 0, 0, 0, 0, 0, 0, 0, 527, 1000, 906, 0
> }, LT_window = {0 , 301288, 0
> },
> last_time = 1633625259739989, is_busy = 1 '\001'}}
>
> Andrew
>

Re: [OpenSIPS-Users] mi_fifo lock on reply after a period of time in 3.1.3

2021-10-07 Thread Andrew Yager
Hi,

The procs had restarted, but I repeated the exercise and it's PID 48 this time.

(gdb) bt full
#0  0x7f6fe6d48297 in __libc_write (fd=fd@entry=203,
buf=buf@entry=0x7ffed7680050, nbytes=nbytes@entry=24) at
../sysdeps/unix/sysv/linux/write.c:26
resultvar = 18446744073709551104
__arg3 = 
_a2 = 
sc_ret = 
__arg1 = 
_a3 = 
sc_cancel_oldtype = 
resultvar = 
resultvar = 
__arg2 = 
_a1 = 
#1  0x55b7abca8d33 in __ipc_send_job (payload2=0x0,
payload1=0x55b7abc6e760 , type=0, fd=203) at
ipc.c:171
job = {snd_proc = 1, handler_type = 0, payload1 =
0x55b7abc6e760 , payload2 = 0x0}
n = 
job = 
n = 
__FUNCTION__ = "__ipc_send_job"
#2  ipc_send_rpc (dst_proc=dst_proc@entry=48,
rpc=rpc@entry=0x55b7abc6e760 ,
param=param@entry=0x0) at ipc.c:194
[snip]
---

(gdb) p pt[48]
$1 = {pid = 2639896, type = TYPE_UDP, pg_filter = 0x7f6fc6b4a930, desc
= "SIP receiver udp:1.1.1.1:5060", '\000' , flags =
228, ipc_pipe = {202,
203}, ipc_pipe_holder = {202, 203}, ipc_sync_pipe = {204, 205},
ipc_sync_pipe_holder = {204, 205}, tcp_socks_holder = {642, 643},
unix_sock = 642, log_level = 4,
  default_log_level = 4, load_rt = 0x7f6ec9493f10, load_1m =
0x7f6ec9493fd0, load_10m = 0x7f6ec9494090, pkg_total = 0x7f6ec94d2048,
pkg_used = 0x7f6ec94d20f8,
  pkg_rused = 0x7f6ec94d3b80, pkg_mused = 0x7f6ec94d3c38, pkg_free =
0x7f6ec94d2e00, pkg_frags = 0x7f6ec94d8050, load = {ST_window = {0
, 509,
  1000 , 996, 1000 , 996,
1000 , 997, 1000, 1000, 1000, 1000, 1000, 1000,
1000, 1000, 997,
  1000 , 996, 1000 , 997, 1000
, 997, 1000, 1000, 1000, 1000, 1000, 997, 1000
, 967, 0, 0, 0,
  0, 0, 255, 151, 0, 0, 0, 0, 0, 0, 0, 0, 527, 1000, 906, 0
}, LT_window = {0 , 301288, 0
},
last_time = 1633625259739989, is_busy = 1 '\001'}}
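The `__libc_write` in frame #0 is consistent with the destination's IPC pipe being full: once a pipe's kernel buffer (typically 64 KiB on Linux) fills with nobody draining it, the next write blocks indefinitely. That failure mode is easy to demonstrate outside OpenSIPS with a plain FIFO:

```shell
# Demonstrate a writer blocking once a FIFO's kernel buffer fills.
# Nothing here touches OpenSIPS; it is just the underlying pipe behaviour.
f=$(mktemp -u); mkfifo "$f"
sleep 3 < "$f" &                       # reader holds the FIFO open but never reads
timeout 1 sh -c "head -c 200000 /dev/zero > '$f'"
echo "writer exit=$?"                  # 124: timed out while blocked mid-write
rm -f "$f"
```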

Andrew



On Fri, 8 Oct 2021 at 01:15, Bogdan-Andrei Iancu  wrote:
>
> OK, getting closer :). Let's see who proc ID 46 is.
>
> In GDB, just do
>
> > p pt[46]
>
> Regards,
>
> Bogdan-Andrei Iancu
>
> OpenSIPS Founder and Developer
>   https://www.opensips-solutions.com
> OpenSIPS eBootcamp 2021
>   https://opensips.org/training/OpenSIPS_eBootcamp_2021/
>
> On 10/7/21 3:37 PM, Andrew Yager wrote:
>
> Hi Bogdan-Andrei,
>
> OK; my non-expert opinion is that these two lines in ipc.c may point
> to a symptom of the problem…
>
> // FIXME - we should check if the destination process really listens
> // for read, otherwise we may end up filling in the pipe and block
>
> but, I really want to know why it's blocking on the IPC comms before that.
>
> Here's the BT.
>
> (gdb) bt full
> #0  0x7f1e4ce9b297 in __libc_write (fd=fd@entry=195, 
> buf=buf@entry=0x7fffc86261b0, nbytes=nbytes@entry=24) at 
> ../sysdeps/unix/sysv/linux/write.c:26
> resultvar = 18446744073709551104
> __arg3 = 
> _a2 = 
> sc_ret = 
> __arg1 = 
> _a3 = 
> sc_cancel_oldtype = 
> resultvar = 
> resultvar = 
> __arg2 = 
> _a1 = 
> #1  0x55ddea5cad33 in __ipc_send_job (payload2=0x0, 
> payload1=0x55ddea590760 , type=0, fd=195) at ipc.c:171
> job = {snd_proc = 1, handler_type = 0, payload1 = 0x55ddea590760 
> , payload2 = 0x0}
> n = 
> job = 
> n = 
> __FUNCTION__ = "__ipc_send_job"
> #2  ipc_send_rpc (dst_proc=dst_proc@entry=46, rpc=rpc@entry=0x55ddea590760 
> , param=param@entry=0x0) at ipc.c:194
>
>

___
Users mailing list
Users@lists.opensips.org
http://lists.opensips.org/cgi-bin/mailman/listinfo/users


Re: [OpenSIPS-Users] mi_fifo lock on reply after a period of time in 3.1.3

2021-10-06 Thread Andrew Yager
Hi Bogdan-Andrei,

This does look to be frozen in this state too. I guess GDB is the next step.
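The frozen state itself is easy to reproduce in miniature: a write to a FIFO with no reader blocks before a single byte moves, which is exactly the trap a stuck mi_fifo responder falls into. A minimal sketch:

```shell
# A write to a FIFO with no reader blocks at open(); timeout turns the
# hang into exit code 124 so the demo terminates cleanly.
f=$(mktemp -u); mkfifo "$f"
timeout 1 sh -c "echo ping > '$f'"
echo "exit=$?"                         # 124: blocked waiting for a reader
rm -f "$f"
```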

Andrew


On Wed, 6 Oct 2021 at 17:12, Bogdan-Andrei Iancu 
wrote:

> Hi Andrew,
>
> What you can do, after the FIFO gets blocked, is take a trap
> ("opensips-cli trap") to try to see what the FIFO process is doing (or trying
> to do). If the trap does not work, let me know and I will give you
> instructions on how to attach GDB directly to the process.
>
> Best regards,
>
> Bogdan-Andrei Iancu
>
> OpenSIPS Founder and Developer
>   https://www.opensips-solutions.com
> OpenSIPS eBootcamp 2021
>   https://opensips.org/training/OpenSIPS_eBootcamp_2021/
>
> On 10/6/21 12:24 AM, Andrew Yager wrote:
>
> Also restarting opensips produces this:
>
> Restarting opensips (via systemctl): opensips.serviceERROR: communication
> exception for 'which' returned: cannot access fifo file /tmp/opensips_fifo:
> [Errno 32] Broken pipe!
>
> Andrew
>
>
> On Wed, 6 Oct 2021 at 08:23, Andrew Yager  wrote:
>
>> Hi,
>>
>> Just replicated this on the 3.2.2 nightly build. Debug logs are enabled,
>> and this is all I can see on the last request that stalled:
>>
>> Oct  6 04:49:32 hvprxy osips[1186130]: DBG:mi_fifo:mi_fifo_callback:
>> running command [{"jsonrpc": "2.0", "id": "5319", "method": "which",
>> "params": []}]
>> Oct  6 04:49:32 hvprxy osips[1186130]: DBG:mi_fifo:mi_fifo_callback: got
>> mi response = [0x558865cc9560]
>> Oct  6 04:49:32 hvprxy osips[1186130]: DBG:mi_fifo:mi_fifo_callback:
>> running command [{"jsonrpc": "2.0", "id": "8377", "method":
>> "get_statistics", "params": [["all"]]}]
>> Oct  6 04:49:32 hvprxy osips[1186133]: DBG:core:ipc_handle_job: received
>> job type 0[RPC] from process 1
>> Oct  6 04:49:32 hvprxy osips[1186134]: DBG:core:ipc_handle_job: received
>> job type 0[RPC] from process 1
>> Oct  6 04:49:32 hvprxy osips[1186135]: DBG:core:ipc_handle_job: received
>> job type 0[RPC] from process 1
>> Oct  6 04:49:32 hvprxy osips[1186136]: DBG:core:ipc_handle_job: received
>> job type 0[RPC] from process 1
>> Oct  6 04:49:32 hvprxy osips[1186137]: DBG:core:ipc_handle_job: received
>> job type 0[RPC] from process 1
>> Oct  6 04:49:32 hvprxy osips[1186138]: DBG:core:ipc_handle_job: received
>> job type 0[RPC] from process 1
>> Oct  6 04:49:32 hvprxy osips[1186139]: DBG:core:ipc_handle_job: received
>> job type 0[RPC] from process 1
>> Oct  6 04:49:32 hvprxy osips[1186140]: DBG:core:ipc_handle_job: received
>> job type 0[RPC] from process 1
>> Oct  6 04:49:32 hvprxy osips[1186141]: DBG:core:ipc_handle_job: received
>> job type 0[RPC] from process 1
>> Oct  6 04:49:32 hvprxy osips[1186142]: DBG:core:ipc_handle_job: received
>> job type 0[RPC] from process 1
>> Oct  6 04:49:32 hvprxy osips[1186143]: DBG:core:ipc_handle_job: received
>> job type 0[RPC] from process 1
>> Oct  6 04:49:32 hvprxy osips[1186144]: DBG:core:ipc_handle_job: received
>> job type 0[RPC] from process 1
>> Oct  6 04:49:32 hvprxy osips[1186145]: DBG:core:ipc_handle_job: received
>> job type 0[RPC] from process 1
>> Oct  6 04:49:32 hvprxy osips[1186146]: DBG:core:ipc_handle_job: received
>> job type 0[RPC] from process 1
>> Oct  6 04:49:32 hvprxy osips[1186147]: DBG:core:ipc_handle_job: received
>> job type 0[RPC] from process 1
>> Oct  6 04:49:32 hvprxy osips[1186148]: DBG:core:ipc_handle_job: received
>> job type 0[RPC] from process 1
>> Oct  6 04:49:32 hvprxy osips[1186149]: DBG:core:ipc_handle_job: received
>> job type 0[RPC] from process 1
>> Oct  6 04:49:32 hvprxy osips[1186150]: DBG:core:ipc_handle_job: received
>> job type 0[RPC] from process 1
>> Oct  6 04:49:32 hvprxy osips[1186151]: DBG:core:ipc_handle_job: received
>> job type 0[RPC] from process 1
>> Oct  6 04:49:32 hvprxy osips[1186152]: DBG:core:ipc_handle_job: received
>> job type 0[RPC] from process 1
>> Oct  6 04:49:32 hvprxy osips[1186153]: DBG:core:ipc_handle_job: received
>> job type 0[RPC] from process 1
>> Oct  6 04:49:32 hvprxy osips[1186154]: DBG:core:ipc_handle_job: received
>> job type 0[RPC] from process 1
>> Oct  6 04:49:32 hvprxy osips[1186155]: DBG:core:ipc_handle_job: received
>> job type 0[RPC] from process 1
>> Oct  6 04:49:32 hvprxy osips[1186156]: DBG:core:ipc_handle_job: received
>> job type 0[RPC] from process 1
>> Oct  6 04:49:32 hvprxy osips[1186157]: DBG:core:ipc_handle_job: received
>> job type 0[RPC] from process 1
>> Oct  6 04

Re: [OpenSIPS-Users] mi_fifo lock on reply after a period of time in 3.1.3

2021-10-05 Thread Andrew Yager
Also restarting opensips produces this:

Restarting opensips (via systemctl): opensips.service
ERROR: communication exception for 'which' returned: cannot access fifo
file /tmp/opensips_fifo: [Errno 32] Broken pipe!
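For reference, a sketch of what is going over that fifo (paths are the module defaults seen in this thread): opensips-cli writes a JSON-RPC 2.0 request into /tmp/opensips_fifo and reads the reply from a per-request /tmp/opensips_fifo_reply_* fifo, which is why stale reply files pile up when replies stall.

```shell
# The request bodies are plain JSON-RPC 2.0, as the debug logs show:
printf '%s\n' '{"jsonrpc": "2.0", "id": "1", "method": "which", "params": []}'
# With a responsive server, the same calls go through opensips-cli:
#   opensips-cli -x mi which
#   opensips-cli -x mi get_statistics all
```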

Andrew


On Wed, 6 Oct 2021 at 08:23, Andrew Yager  wrote:

> Hi,
>
> Just replicated this on the 3.2.2 nightly build. Debug logs are enabled,
> and this is all I can see on the last request that stalled:
>
> Oct  6 04:49:32 hvprxy osips[1186130]: DBG:mi_fifo:mi_fifo_callback:
> running command [{"jsonrpc": "2.0", "id": "5319", "method": "which",
> "params": []}]
> Oct  6 04:49:32 hvprxy osips[1186130]: DBG:mi_fifo:mi_fifo_callback: got
> mi response = [0x558865cc9560]
> Oct  6 04:49:32 hvprxy osips[1186130]: DBG:mi_fifo:mi_fifo_callback:
> running command [{"jsonrpc": "2.0", "id": "8377", "method":
> "get_statistics", "params": [["all"]]}]
> Oct  6 04:49:32 hvprxy osips[1186133]: DBG:core:ipc_handle_job: received
> job type 0[RPC] from process 1
> Oct  6 04:49:32 hvprxy osips[1186134]: DBG:core:ipc_handle_job: received
> job type 0[RPC] from process 1
> Oct  6 04:49:32 hvprxy osips[1186135]: DBG:core:ipc_handle_job: received
> job type 0[RPC] from process 1
> Oct  6 04:49:32 hvprxy osips[1186136]: DBG:core:ipc_handle_job: received
> job type 0[RPC] from process 1
> Oct  6 04:49:32 hvprxy osips[1186137]: DBG:core:ipc_handle_job: received
> job type 0[RPC] from process 1
> Oct  6 04:49:32 hvprxy osips[1186138]: DBG:core:ipc_handle_job: received
> job type 0[RPC] from process 1
> Oct  6 04:49:32 hvprxy osips[1186139]: DBG:core:ipc_handle_job: received
> job type 0[RPC] from process 1
> Oct  6 04:49:32 hvprxy osips[1186140]: DBG:core:ipc_handle_job: received
> job type 0[RPC] from process 1
> Oct  6 04:49:32 hvprxy osips[1186141]: DBG:core:ipc_handle_job: received
> job type 0[RPC] from process 1
> Oct  6 04:49:32 hvprxy osips[1186142]: DBG:core:ipc_handle_job: received
> job type 0[RPC] from process 1
> Oct  6 04:49:32 hvprxy osips[1186143]: DBG:core:ipc_handle_job: received
> job type 0[RPC] from process 1
> Oct  6 04:49:32 hvprxy osips[1186144]: DBG:core:ipc_handle_job: received
> job type 0[RPC] from process 1
> Oct  6 04:49:32 hvprxy osips[1186145]: DBG:core:ipc_handle_job: received
> job type 0[RPC] from process 1
> Oct  6 04:49:32 hvprxy osips[1186146]: DBG:core:ipc_handle_job: received
> job type 0[RPC] from process 1
> Oct  6 04:49:32 hvprxy osips[1186147]: DBG:core:ipc_handle_job: received
> job type 0[RPC] from process 1
> Oct  6 04:49:32 hvprxy osips[1186148]: DBG:core:ipc_handle_job: received
> job type 0[RPC] from process 1
> Oct  6 04:49:32 hvprxy osips[1186149]: DBG:core:ipc_handle_job: received
> job type 0[RPC] from process 1
> Oct  6 04:49:32 hvprxy osips[1186150]: DBG:core:ipc_handle_job: received
> job type 0[RPC] from process 1
> Oct  6 04:49:32 hvprxy osips[1186151]: DBG:core:ipc_handle_job: received
> job type 0[RPC] from process 1
> Oct  6 04:49:32 hvprxy osips[1186152]: DBG:core:ipc_handle_job: received
> job type 0[RPC] from process 1
> Oct  6 04:49:32 hvprxy osips[1186153]: DBG:core:ipc_handle_job: received
> job type 0[RPC] from process 1
> Oct  6 04:49:32 hvprxy osips[1186154]: DBG:core:ipc_handle_job: received
> job type 0[RPC] from process 1
> Oct  6 04:49:32 hvprxy osips[1186155]: DBG:core:ipc_handle_job: received
> job type 0[RPC] from process 1
> Oct  6 04:49:32 hvprxy osips[1186156]: DBG:core:ipc_handle_job: received
> job type 0[RPC] from process 1
> Oct  6 04:49:32 hvprxy osips[1186157]: DBG:core:ipc_handle_job: received
> job type 0[RPC] from process 1
> Oct  6 04:49:32 hvprxy osips[1186158]: DBG:core:ipc_handle_job: received
> job type 0[RPC] from process 1
> Oct  6 04:49:32 hvprxy osips[1186159]: DBG:core:ipc_handle_job: received
> job type 0[RPC] from process 1
> Oct  6 04:49:32 hvprxy osips[1186160]: DBG:core:ipc_handle_job: received
> job type 0[RPC] from process 1
> Oct  6 04:49:32 hvprxy osips[1186161]: DBG:core:ipc_handle_job: received
> job type 0[RPC] from process 1
> Oct  6 04:49:32 hvprxy osips[1186162]: DBG:core:ipc_handle_job: received
> job type 0[RPC] from process 1
> Oct  6 04:49:32 hvprxy osips[1186163]: DBG:core:ipc_handle_job: received
> job type 0[RPC] from process 1
> Oct  6 04:49:32 hvprxy osips[1186164]: DBG:core:ipc_handle_job: received
> job type 0[RPC] from process 1
> Oct  6 04:49:32 hvprxy osips[1186165]: DBG:core:ipc_handle_job: received
> job type 0[RPC] from process 1
> Oct  6 04:49:32 hvprxy osips[1186166]: DBG:core:ipc_handle_job: received
> job type 0[RPC] from process 1
> Oct  6 04:49:32 hvprxy osips[1186167]: DBG:core:ipc_handle

Re: [OpenSIPS-Users] mi_fifo lock on reply after a period of time in 3.1.3

2021-10-05 Thread Andrew Yager
Hi,

Just replicated this on the 3.2.2 nightly build. Debug logs are enabled,
and this is all I can see on the last request that stalled:

Oct  6 04:49:32 hvprxy osips[1186130]: DBG:mi_fifo:mi_fifo_callback:
running command [{"jsonrpc": "2.0", "id": "5319", "method": "which",
"params": []}]
Oct  6 04:49:32 hvprxy osips[1186130]: DBG:mi_fifo:mi_fifo_callback: got mi
response = [0x558865cc9560]
Oct  6 04:49:32 hvprxy osips[1186130]: DBG:mi_fifo:mi_fifo_callback:
running command [{"jsonrpc": "2.0", "id": "8377", "method":
"get_statistics", "params": [["all"]]}]
Oct  6 04:49:32 hvprxy osips[1186133]: DBG:core:ipc_handle_job: received
job type 0[RPC] from process 1
Oct  6 04:49:32 hvprxy osips[1186134]: DBG:core:ipc_handle_job: received
job type 0[RPC] from process 1
Oct  6 04:49:32 hvprxy osips[1186135]: DBG:core:ipc_handle_job: received
job type 0[RPC] from process 1
Oct  6 04:49:32 hvprxy osips[1186136]: DBG:core:ipc_handle_job: received
job type 0[RPC] from process 1
Oct  6 04:49:32 hvprxy osips[1186137]: DBG:core:ipc_handle_job: received
job type 0[RPC] from process 1
Oct  6 04:49:32 hvprxy osips[1186138]: DBG:core:ipc_handle_job: received
job type 0[RPC] from process 1
Oct  6 04:49:32 hvprxy osips[1186139]: DBG:core:ipc_handle_job: received
job type 0[RPC] from process 1
Oct  6 04:49:32 hvprxy osips[1186140]: DBG:core:ipc_handle_job: received
job type 0[RPC] from process 1
Oct  6 04:49:32 hvprxy osips[1186141]: DBG:core:ipc_handle_job: received
job type 0[RPC] from process 1
Oct  6 04:49:32 hvprxy osips[1186142]: DBG:core:ipc_handle_job: received
job type 0[RPC] from process 1
Oct  6 04:49:32 hvprxy osips[1186143]: DBG:core:ipc_handle_job: received
job type 0[RPC] from process 1
Oct  6 04:49:32 hvprxy osips[1186144]: DBG:core:ipc_handle_job: received
job type 0[RPC] from process 1
Oct  6 04:49:32 hvprxy osips[1186145]: DBG:core:ipc_handle_job: received
job type 0[RPC] from process 1
Oct  6 04:49:32 hvprxy osips[1186146]: DBG:core:ipc_handle_job: received
job type 0[RPC] from process 1
Oct  6 04:49:32 hvprxy osips[1186147]: DBG:core:ipc_handle_job: received
job type 0[RPC] from process 1
Oct  6 04:49:32 hvprxy osips[1186148]: DBG:core:ipc_handle_job: received
job type 0[RPC] from process 1
Oct  6 04:49:32 hvprxy osips[1186149]: DBG:core:ipc_handle_job: received
job type 0[RPC] from process 1
Oct  6 04:49:32 hvprxy osips[1186150]: DBG:core:ipc_handle_job: received
job type 0[RPC] from process 1
Oct  6 04:49:32 hvprxy osips[1186151]: DBG:core:ipc_handle_job: received
job type 0[RPC] from process 1
Oct  6 04:49:32 hvprxy osips[1186152]: DBG:core:ipc_handle_job: received
job type 0[RPC] from process 1
Oct  6 04:49:32 hvprxy osips[1186153]: DBG:core:ipc_handle_job: received
job type 0[RPC] from process 1
Oct  6 04:49:32 hvprxy osips[1186154]: DBG:core:ipc_handle_job: received
job type 0[RPC] from process 1
Oct  6 04:49:32 hvprxy osips[1186155]: DBG:core:ipc_handle_job: received
job type 0[RPC] from process 1
Oct  6 04:49:32 hvprxy osips[1186156]: DBG:core:ipc_handle_job: received
job type 0[RPC] from process 1
Oct  6 04:49:32 hvprxy osips[1186157]: DBG:core:ipc_handle_job: received
job type 0[RPC] from process 1
Oct  6 04:49:32 hvprxy osips[1186158]: DBG:core:ipc_handle_job: received
job type 0[RPC] from process 1
Oct  6 04:49:32 hvprxy osips[1186159]: DBG:core:ipc_handle_job: received
job type 0[RPC] from process 1
Oct  6 04:49:32 hvprxy osips[1186160]: DBG:core:ipc_handle_job: received
job type 0[RPC] from process 1
Oct  6 04:49:32 hvprxy osips[1186161]: DBG:core:ipc_handle_job: received
job type 0[RPC] from process 1
Oct  6 04:49:32 hvprxy osips[1186162]: DBG:core:ipc_handle_job: received
job type 0[RPC] from process 1
Oct  6 04:49:32 hvprxy osips[1186163]: DBG:core:ipc_handle_job: received
job type 0[RPC] from process 1
Oct  6 04:49:32 hvprxy osips[1186164]: DBG:core:ipc_handle_job: received
job type 0[RPC] from process 1
Oct  6 04:49:32 hvprxy osips[1186165]: DBG:core:ipc_handle_job: received
job type 0[RPC] from process 1
Oct  6 04:49:32 hvprxy osips[1186166]: DBG:core:ipc_handle_job: received
job type 0[RPC] from process 1
Oct  6 04:49:32 hvprxy osips[1186167]: DBG:core:ipc_handle_job: received
job type 0[RPC] from process 1
Oct  6 04:49:32 hvprxy osips[1186168]: DBG:core:ipc_handle_job: received
job type 0[RPC] from process 1
Oct  6 04:49:32 hvprxy osips[1186169]: DBG:core:ipc_handle_job: received
job type 0[RPC] from process 1
Oct  6 04:49:32 hvprxy osips[1186170]: DBG:core:ipc_handle_job: received
job type 0[RPC] from process 1
Oct  6 04:49:32 hvprxy osips[1186171]: DBG:core:ipc_handle_job: received
job type 0[RPC] from process 1
Oct  6 04:49:32 hvprxy osips[1186172]: DBG:core:ipc_handle_job: received
job type 0[RPC] from process 1
Oct  6 04:49:32 hvprxy osips[1186173]: DBG:core:ipc_handle_job: received
job type 0[RPC] from process 1
Oct  6 04:4

Re: [OpenSIPS-Users] mi_fifo lock on reply after a period of time in 3.1.3

2021-10-04 Thread Andrew Yager
Just further to this: it looks like the ul_dump command is what triggers
the issue first.

Andrew


On Sat, 2 Oct 2021 at 13:05, Andrew Yager  wrote:

> Hi,
>
> Not entirely sure where to start digging on this one. On 3.1.3 we've had
> an issue appear "suddenly" whereby our mi command output seems to "block"
> and not return any useful data. Restarting opensips processes restores
> comms.
>
> We end up with a huge number of opensips_fifo_reply_* files in the /tmp
> directory, but effectively no data is ever written and opensips-cli freezes.
>
> We've restarted twice to "resolve" now, but the issue appears to reoccur.
> We're using the mi get_statistics command to monitor stats/health and
> ul_dump to pull some detail about usrloc data every 5 minutes.
>
> I don't think I can raise the debug level because the MI process
> doesn't seem to be communicating (at least, it doesn't respond).
>
> opensips 3.1.3-1
> opensips-cli 0.1~20210707~572d2db-
> 5.4.0-88-generic #99-Ubuntu SMP Thu Sep 23 17:29:00 UTC 2021 x86_64 x86_64
> x86_64 GNU/Linux
>
> I can restart again, but if this is going to keep happening, I want to
> make sure I have something useful in the logs to be able to trace down.
>
> Andrew
>
___
Users mailing list
Users@lists.opensips.org
http://lists.opensips.org/cgi-bin/mailman/listinfo/users


[OpenSIPS-Users] mi_fifo lock on reply after a period of time in 3.1.3

2021-10-01 Thread Andrew Yager
Hi,

Not entirely sure where to start digging on this one. On 3.1.3 we've had an
issue appear "suddenly" whereby our mi command output seems to "block" and
not return any useful data. Restarting opensips processes restores comms.

We end up with a huge number of opensips_fifo_reply_* files in the /tmp
directory, but effectively no data is ever written and opensips-cli freezes.

We've restarted twice to "resolve" now, but the issue appears to reoccur.
We're using the mi get_statistics command to monitor stats/health and
ul_dump to pull some detail about usrloc data every 5 minutes.

I don't think I can raise the debug level because the MI process
doesn't seem to be communicating (at least, it doesn't respond).

opensips 3.1.3-1
opensips-cli 0.1~20210707~572d2db-
5.4.0-88-generic #99-Ubuntu SMP Thu Sep 23 17:29:00 UTC 2021 x86_64 x86_64
x86_64 GNU/Linux

I can restart again, but if this is going to keep happening, I want to make
sure I have something useful in the logs to be able to trace down.

Andrew


Re: [OpenSIPS-Users] Segfault in 3.2.2 (inc nightly) but not 3.2 in mid_registrar

2021-09-28 Thread Andrew Yager
No worries; glad the detective work born out of frustration helped ;) (and 
thanks to my colleague Mark V who is lurking around here for his work in 
getting to this point!)

We’ll give it a test in the next nightly build!

A

From: Liviu Chircu 
Sent: Tuesday, September 28, 2021 9:56:43 PM
To: OpenSIPS users mailling list ; Andrew Yager 

Subject: Re: [OpenSIPS-Users] Segfault in 3.2.2 (inc nightly) but not 3.2 in 
mid_registrar

On 28.09.2021 14:29, Liviu Chircu wrote:
> Will put up a fix soon, thanks again for the nice hint!

Done, see:
https://github.com/OpenSIPS/opensips/commit/6765b7d95b99626ca81986561f3d7851b3e90c8f

--
Liviu Chircu
www.twitter.com/liviuchircu | www.opensips-solutions.com



Re: [OpenSIPS-Users] Segfault in 3.2.2 (inc nightly) but not 3.2 in mid_registrar

2021-09-28 Thread Andrew Yager
Just further to this, I'm pretty sure this is a regression introduced in
https://github.com/OpenSIPS/opensips/commit/c11f92698c6f345d8921d645177f71aa36c9791d.

On Tue, 28 Sept 2021 at 19:04, Andrew Yager  wrote:

> Hi,
>
> Since testing 3.2.2 we've noticed a segfault affecting registrations
> pretty regularly through mid_registrar.
>
> The issue occurs intermittently, but seemingly when the contact
> doesn't exist and needs to be created. We are currently using
> in-memory mid_registrar without DB backing to test; although we have
> previously had DB backing.
>
> We can trigger this "more commonly" with Zoiper as the client, but we
> can't pin down a specific cause. We have checked the GitHub issues and
> can't see why this would be happening in 3.2.2 but not 3.2.0.
>
> Host OS is Ubuntu 20.04.
>
> We're seeing the following trace from the info log.
>
> Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:parse_headers:
> flags=
> Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:parse_params:
> Parsing params for:[expires=3599]
> Sep 28 18:50:55 hvprxy-dev osips[1134252]:
> DBG:mid_registrar:save_restore_req_contacts: saving + restoring all
> contact URIs ...
> Sep 28 18:50:55 hvprxy-dev osips[1134252]:
> DBG:mid_registrar:calc_contact_expires: expires: 3599
> Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:evi_param_set: set
> str domain='location'
> Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:evi_param_set: set
> str aor='1000'
> Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:get_dummy_sip_msg:
> reusing the static sip msg 0x7f343336a7c8
> Sep 28 18:50:55 hvprxy-dev osips[1134252]:
> DBG:core:release_dummy_sip_msg: cleaning the static sip msg
> 0x7f343336a7c8
> Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:destroy_avp_list:
> destroying list (nil)
> Sep 28 18:50:55 hvprxy-dev osips[1134252]:
> DBG:usrloc:run_ul_callbacks: contact=0x7f3431eb4598, callback type
> 16/208, id 1 entered
> Sep 28 18:50:55 hvprxy-dev osips[1134252]:
> DBG:mid_registrar:mid_reg_aor_event: AOR callback (16): contact='1000'
> Sep 28 18:50:55 hvprxy-dev osips[1134252]:
> DBG:mid_registrar:save_restore_req_contacts: INSERTING contact with
> expires 1632819655
> Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:evi_param_set: set
> str domain='location'
> Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:evi_param_set: set
> str aor='1000'
> Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:evi_param_set: set
> str uri='sip:1000@1.1.1.1:53149;rinstance=718be353c21105be;transport=tls'
> Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:evi_param_set: set
> str received=''
> Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:evi_param_set: set
> str path=''
> Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:evi_param_set: set
> int qval=-1
> Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:evi_param_set: set
> str user_agent='Z 5.5.5 v2.10.15.2'
> Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:evi_param_set: set
> str socket='tls:2.2.2.2:5061'
> Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:evi_param_set: set
> int bflags=14
> Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:evi_param_set: set
> int expires=1632819655
> Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:evi_param_set: set
> str callid='c77MPaFAE7pAVYD5R-s1-w..'
> Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:evi_param_set: set
> int cseq=2
> Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:evi_param_set: set
> str attr=''
> Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:evi_param_set: set
> int latency=0
> Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:evi_param_set: set
> str shtag=''
> Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:get_dummy_sip_msg:
> reusing the static sip msg 0x7f343336a7c8
> Sep 28 18:50:55 hvprxy-dev osips[1134252]:
> DBG:core:release_dummy_sip_msg: cleaning the static sip msg
> 0x7f343336a7c8
> Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:destroy_avp_list:
> destroying list (nil)
> Sep 28 18:50:55 hvprxy-dev osips[1134252]: CRITICAL:core:sig_usr:
> segfault in process pid: 1134252, id: 4
> Sep 28 18:50:55 hvprxy-dev osips[1134252]:
> DBG:core:restore_segv_handler: restoring SIGSEGV handler...
>
> BT says:
>
> Core was generated by `/usr/sbin/opensips -P
> /run/opensips/opensips.pid -f /etc/opensips/opensips.cfg'.
> Program terminated with signal SIGSEGV, Segmentation fault.
> #0  __memmove_avx_unaligned_erms () at
> ../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:287
> 287 ../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S: No such
> file or directory.
> (gdb) bt
> #0  __memmove_avx_unaligned_erms () a

[OpenSIPS-Users] Segfault in 3.2.2 (inc nightly) but not 3.2 in mid_registrar

2021-09-28 Thread Andrew Yager
Hi,

Since testing 3.2.2 we've noticed a segfault affecting registrations
pretty regularly through mid_registrar.

The issue occurs intermittently, but seemingly when the contact
doesn't exist and needs to be created. We are currently using
in-memory mid_registrar without DB backing to test; although we have
previously had DB backing.

We can trigger this "more commonly" with Zoiper as the client, but we
can't pin down a specific cause. We have checked the GitHub issues and
can't see why this would be happening in 3.2.2 but not 3.2.0.

Host OS is Ubuntu 20.04.

We're seeing the following trace from the info log.

Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:parse_headers:
flags=
Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:parse_params:
Parsing params for:[expires=3599]
Sep 28 18:50:55 hvprxy-dev osips[1134252]:
DBG:mid_registrar:save_restore_req_contacts: saving + restoring all
contact URIs ...
Sep 28 18:50:55 hvprxy-dev osips[1134252]:
DBG:mid_registrar:calc_contact_expires: expires: 3599
Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:evi_param_set: set
str domain='location'
Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:evi_param_set: set
str aor='1000'
Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:get_dummy_sip_msg:
reusing the static sip msg 0x7f343336a7c8
Sep 28 18:50:55 hvprxy-dev osips[1134252]:
DBG:core:release_dummy_sip_msg: cleaning the static sip msg
0x7f343336a7c8
Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:destroy_avp_list:
destroying list (nil)
Sep 28 18:50:55 hvprxy-dev osips[1134252]:
DBG:usrloc:run_ul_callbacks: contact=0x7f3431eb4598, callback type
16/208, id 1 entered
Sep 28 18:50:55 hvprxy-dev osips[1134252]:
DBG:mid_registrar:mid_reg_aor_event: AOR callback (16): contact='1000'
Sep 28 18:50:55 hvprxy-dev osips[1134252]:
DBG:mid_registrar:save_restore_req_contacts: INSERTING contact with
expires 1632819655
Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:evi_param_set: set
str domain='location'
Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:evi_param_set: set
str aor='1000'
Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:evi_param_set: set
str uri='sip:1000@1.1.1.1:53149;rinstance=718be353c21105be;transport=tls'
Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:evi_param_set: set
str received=''
Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:evi_param_set: set
str path=''
Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:evi_param_set: set
int qval=-1
Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:evi_param_set: set
str user_agent='Z 5.5.5 v2.10.15.2'
Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:evi_param_set: set
str socket='tls:2.2.2.2:5061'
Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:evi_param_set: set
int bflags=14
Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:evi_param_set: set
int expires=1632819655
Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:evi_param_set: set
str callid='c77MPaFAE7pAVYD5R-s1-w..'
Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:evi_param_set: set
int cseq=2
Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:evi_param_set: set
str attr=''
Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:evi_param_set: set
int latency=0
Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:evi_param_set: set
str shtag=''
Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:get_dummy_sip_msg:
reusing the static sip msg 0x7f343336a7c8
Sep 28 18:50:55 hvprxy-dev osips[1134252]:
DBG:core:release_dummy_sip_msg: cleaning the static sip msg
0x7f343336a7c8
Sep 28 18:50:55 hvprxy-dev osips[1134252]: DBG:core:destroy_avp_list:
destroying list (nil)
Sep 28 18:50:55 hvprxy-dev osips[1134252]: CRITICAL:core:sig_usr:
segfault in process pid: 1134252, id: 4
Sep 28 18:50:55 hvprxy-dev osips[1134252]:
DBG:core:restore_segv_handler: restoring SIGSEGV handler...

BT says:

Core was generated by `/usr/sbin/opensips -P
/run/opensips/opensips.pid -f /etc/opensips/opensips.cfg'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  __memmove_avx_unaligned_erms () at
../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:287
287 ../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S: No such
file or directory.
(gdb) bt
#0  __memmove_avx_unaligned_erms () at
../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:287
#1  0x7fcd40357065 in memcpy (__len=,
__src=, __dest=) at
/usr/include/x86_64-linux-gnu/bits/string_fortified.h:34
#2  kv_put (_store=, _key=,
_val=0x7fff9752f750) at kv_store.c:87
#3  0x7fcd3dfc11e5 in store_ucontact_data (c=0x7fcd4434f458,
mri=0x7fcd4585fb80 , ct_uri=0x6152d9d6, expires=, expires_out=21925, last_reg_ts=94, last_cseq=0) at
ul_storage.c:201
#4  0x7fcd3dfc7349 in mid_reg_store_ct_data (c=,
info=) at save.c:243
#5  0x7fcd4034adf1 in insert_ucontact (_r=0x7fcd43b2f768,
_contact=0x7fcd44020a48, _ci=0x7fcd3dff4060 ,
match=0x7fcd44352e30, skip_replication=,
_c=0x7fff9752f8e0) at urecord.c:876
#6  0x7fcd3dfd34ef in 
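For future crashes, a core dump plus a full backtrace (as requested elsewhere in the thread) usually pins down the exact field being copied. A sketch, with example paths:

```shell
# Enable core dumps in the shell (or unit) that launches opensips
ulimit -c unlimited
ulimit -c                     # confirm the soft limit is now unlimited
# Optionally route cores to a predictable path (needs root):
#   echo '/tmp/core.%e.%p' > /proc/sys/kernel/core_pattern
# After the crash, open the core with the matching binary:
#   gdb /usr/sbin/opensips /tmp/core.opensips.<pid>
#   (gdb) bt full
#   (gdb) frame 2             # e.g. inspect kv_put()'s arguments
#   (gdb) info locals
```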

Re: [OpenSIPS-Users] bla_presentity_spec variable seems to not work?

2021-05-09 Thread Andrew Yager
I think I had bridge line appearances and BLF state confused but,
regardless, the correct way to do what I was trying to do was to update
the $ru variable. This updates the presentity_uri value as I expected,
and handsets work correctly.
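A minimal sketch of that approach (untested; "sip:foo@bar.example" is a placeholder):

```
route {
    if (is_method("SUBSCRIBE")) {
        # rewriting $ru before handle_subscribe() changes the
        # presentity_uri stored for the subscription
        $ru = "sip:foo@bar.example";  # placeholder AOR
        handle_subscribe();
        exit;
    }
}
```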

Hope this helps someone else trying to find this in the future.

Thanks,
Andrew

On Thu, 6 May 2021 at 11:01, Andrew Yager  wrote:

> Hi,
>
> Hoping someone can give me a pointer on using the bla_presentity_spec
> setting. I can't seem to get it to have any effect on the subscription
> as it's reported in the watchers table.
>
> loadmodule "presence.so"
> modparam("presence", "db_url",
> "mysql://opensips:opensipsrw@127.0.0.1:3306/opensips")
> modparam("presence", "fallback2db", 1)
> modparam("presence", "cluster_pres_events" ,"presence, dialog;sla,
> message-summary")
> modparam("presence", "bla_presentity_spec", "$var(bla_pres)")
>
> loadmodule "presence_dialoginfo.so"
> loadmodule "presence_mwi.so"
>
> route {
> if(is_method("SUBSCRIBE"))
> {
> xlog("SUBSCRIBE from $si to $ru");
>    $var(bla_pres) = "sip:f...@bar.com";
> handle_subscribe();
> exit;
> }
> }
>
> … but the active_watcher table has no record anywhere of "f...@bar.com"
>
> What I was expecting to be able to do was to modify the presentity_uri
> or to_user or to_domain, but none of these seem to happen.
>
> Running 3.1.1-1 apt package on Ubuntu 20 for this test.
>
> Thanks,
> Andrew
>


[OpenSIPS-Users] bla_presentity_spec variable seems to not work?

2021-05-05 Thread Andrew Yager
Hi,

Hoping someone can give me a pointer on using the bla_presentity_spec
setting. I can't seem to get it to have any effect on the subscription
as it's reported in the watchers table.

loadmodule "presence.so"
modparam("presence", "db_url",
"mysql://opensips:opensipsrw@127.0.0.1:3306/opensips")
modparam("presence", "fallback2db", 1)
modparam("presence", "cluster_pres_events" ,"presence, dialog;sla,
message-summary")
modparam("presence", "bla_presentity_spec", "$var(bla_pres)")

loadmodule "presence_dialoginfo.so"
loadmodule "presence_mwi.so"

route {
if(is_method("SUBSCRIBE"))
{
xlog("SUBSCRIBE from $si to $ru");
    $var(bla_pres) = "sip:f...@bar.com";
handle_subscribe();
exit;
}
}

… but the active_watcher table has no record anywhere of "f...@bar.com"

What I was expecting to be able to do was to modify the presentity_uri
or to_user or to_domain, but none of these seem to happen.

Running 3.1.1-1 apt package on Ubuntu 20 for this test.

Thanks,
Andrew



Re: [OpenSIPS-Users] TLS NOTIFY transmit fails

2021-05-03 Thread Andrew Yager
I believe I have resolved this issue with one (or more) of the following:

* Made sure that NOTIFY and SUBSCRIBE messages were executing record_route.
It looks like some cases would bypass this in the logic, and this is
possibly the cause of the confusion.
* Setting the branch flag on the pre-branch options
* Removing the stateless replies (as they are now being forwarded)

I suspect in multi-contact scenarios this will currently break, but we're
prepared to accept this I think.
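Roughly, combining the first two fixes looks like this (a sketch, untested; the flag name is defined via tcp_no_new_conn_bflag in the original config):

```
if (is_method("SUBSCRIBE|NOTIFY")) {
    record_route();              # keep in-dialog requests on this proxy
    setbflag("TCP_NO_CONNECT");  # never open a new TCP/TLS connection
                                 # for this branch, reuse the existing one
    t_relay();                   # relay statefully instead of the earlier
                                 # sl_send_reply() short-circuit
    exit;
}
```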

Thanks,
Andrew


[OpenSIPS-Users] TLS NOTIFY transmit fails

2021-05-01 Thread Andrew Yager
Hi All,

Hoping someone can help me with this - I think it should be pretty simple.

We're using mid_registrar to provide a Proxy between Asterisk and clients
(mixed UDP, TCP and TLS).

Calls in/out work fine, but I'm having some difficulties with NOTIFY
messages being transmitted back to the client a "period" of time after the
register event. This period of time seems to be 2 minutes or so. It looks
like even though the $ru is set correctly, it's not reusing the TLS socket,
but instead trying to create a new transaction/socket which doesn't work.
I'm assuming from the logs the important thing is the
"DBG:tm:t_lookup_request: no transaction found"; and that what has happened
is the original register transaction has "expired" and so it needs to
create a new one, despite the TCP_NO_CONNECT bflag being set.

INVITEs are properly relayed and reuse the socket without fail; it "seems"
it's only NOTIFY messages that are causing me grief.

Our config looks like this:

...

tcp_no_new_conn_bflag = TCP_NO_CONNECT
tcp_keepidle = 30
...
loadmodule "tm.so"
modparam("tm", "fr_timeout", 5)
modparam("tm", "fr_inv_timeout", 30)
modparam("tm", "restart_fr_on_each_reply", 0)
modparam("tm", "onreply_avp_mode", 1)

loadmodule "mid_registrar.so"
modparam("mid_registrar", "mode", 2) /* 0 = mirror / 1 = ct / 2 = AoR */
modparam("mid_registrar", "outgoing_expires", 3600)
modparam("mid_registrar", "contact_id_insertion", "ct-param")
modparam("mid_registrar", "tcp_persistent_flag",
"TCP_PERSIST_REGISTRATIONS")
...

if ($si == "asterisk.ip" && $sp == 5060) {
xlog("about to do mid_registrar_lookup\n");
if (!mid_registrar_lookup("location")) {
xlog("mid registrar lookup failed\n");
xlog("RU is $ru FU is $fu TU is $tu\n");
$ru = "sip:"+$tU+"@asterisk.ip";
xlog("Rewrote to $ru\n"); # This deals with NOTIFY
requests that don't have the user in the SIP message
if (!mid_registrar_lookup("location")) {
xlog("Could not find user, going to reply 404 and
drop the request.\n");
sl_send_reply(404,"Contact is not here anymore");

exit;
}
}
} else {
fix_nated_contact();
$ru = "sip:"+$tU+"@asterisk.ip:5060";
}
if ( is_method("NOTIFY")) {
setbflag("TCP_NO_CONNECT");
xlog("Notify with to tag\n");
xlog("Ru is $ru\n");
sl_send_reply(200, "OK, we got this!");
t_relay();
exit;
}

...

Debug Logs:

May  2 04:59:42 hvprxy /usr/sbin/opensips[96845]: about to do
mid_registrar_lookup
May  2 04:59:42 hvprxy /usr/sbin/opensips[96845]: DBG:core:unescape_user:
unescaped string is <>
May  2 04:59:42 hvprxy /usr/sbin/opensips[96845]: DBG:mid_registrar:lookup:
'' Not found in usrloc
May  2 04:59:42 hvprxy /usr/sbin/opensips[96845]: mid registrar lookup
failed
May  2 04:59:42 hvprxy /usr/sbin/opensips[96845]: DBG:core:parse_to_param:
tag=f7a75172-f02b-419e-89a0-ad2481768e88
May  2 04:59:42 hvprxy /usr/sbin/opensips[96845]: DBG:core:_parse_to: end
of header reached, state=29
May  2 04:59:42 hvprxy /usr/sbin/opensips[96845]: DBG:core:_parse_to:
display={}, ruri={sip:1014@x.x.x.x}
May  2 04:59:42 hvprxy /usr/sbin/opensips[96845]: RU is
sip:y,y,y,y,y:5061;transport=tls;lr;r2=on;ftag=2210321712 FU is
sip:1014@x.x.x.x TU is sip:1003-test@x.x.x.x
May  2 04:59:42 hvprxy /usr/sbin/opensips[96845]: Rewrote to
sip:1003-test@x.x.x.x
May  2 04:59:42 hvprxy /usr/sbin/opensips[96845]: DBG:core:unescape_user:
unescaped string is <1003-test>
May  2 04:59:42 hvprxy /usr/sbin/opensips[96845]:
DBG:mid_registrar:select_contacts: ct: sip:1003-test@z.z.z.z
:3353;transport=TLS
May  2 04:59:42 hvprxy /usr/sbin/opensips[96845]:
DBG:mid_registrar:push_branch: setting msg R-URI 
May  2 04:59:42 hvprxy /usr/sbin/opensips[96845]: Notify with to tag
May  2 04:59:42 hvprxy /usr/sbin/opensips[96845]: Ru is
sip:1003-test@z.z.z.z:3353;transport=TLS
May  2 04:59:42 hvprxy /usr/sbin/opensips[96845]: DBG:core:parse_headers:
flags=
May  2 04:59:42 hvprxy /usr/sbin/opensips[96845]: DBG:tm:t_newtran:
transaction on entrance=0x
May  2 04:59:42 hvprxy /usr/sbin/opensips[96845]: DBG:core:parse_headers:
flags=
May  2 04:59:42 hvprxy /usr/sbin/opensips[96845]: DBG:core:parse_headers:
flags=78
May  2 04:59:42 hvprxy /usr/sbin/opensips[96845]: DBG:tm:t_lookup_request:
start searching: hash=65460, isACK=0
May  2 04:59:42 hvprxy /usr/sbin/opensips[96845]: DBG:tm:matching_3261:
RFC3261 transaction matching failed
May  2 04:59:42 hvprxy /usr/sbin/opensips[96845]: DBG:tm:t_lookup_request:
no transaction found
May  2 04:59:42 hvprxy /usr/sbin/opensips[96845]: DBG:core:mk_proxy: doing
DNS lookup...
May  2 04:59:42 hvprxy /usr/sbin/opensips[96845]: 

[OpenSIPS-Users] Hold and Transfer from Mitel PBX drops call

2009-07-16 Thread Andrew Yager
Hi,

This is more of a "has anyone else had this problem and fixed it?" type
of question.

Our setup has OpenSIPS 1.4, B2BUA and RTPProxy.

Our users on Mitel PBX systems can place and receive calls without a
problem. However, whenever they perform a mid-call task, e.g. placing
the call on hold or attempting a transfer, the call simply drops.

According to the SIP messaging, it looks as though we simply receive a  
BYE message.

The other thing I would appreciate is any advice on where to look to
troubleshoot this issue.

Thanks,
Andrew

--
Andrew Yager, Managing Director   (MACS BCompSc MCP)
Real World Technology Solutions Pty Ltd, your IT department
ph: 1300 798 718 or (02) 9037 0500
fax: (02) 9037 0591 mob: 0405 152 568
http://www.rwts.com.au/ or http://www.stonebridgecomputing.com.au/


___
Users mailing list
Users@lists.opensips.org
http://lists.opensips.org/cgi-bin/mailman/listinfo/users


[OpenSIPS-Users] Restricting CallerID

2009-03-08 Thread Andrew Yager
Hi,

We have a number of customers with large indial ranges, which we
route to them via the alias/location module.

We would like to restrict their outbound caller ID to a number in
their range, but keep this moderately administrable (i.e. database
driven).

Running OpenSIPS 1.4, how might we go about this?

I think I would prefer to reject calls with an invalid outbound
caller ID, but otherwise I could just rewrite it to be blank.
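For anyone searching the archives, one possible approach is a lookup
through the avpops module. This is only a hypothetical sketch: the
table and column names (cid_ranges, username, allowed_cid), the AVP
names, and the assumption that $avp(s:authuser) already holds the
authenticated username are all placeholders, and the avp_check flags
should be verified against the avpops documentation for your version.

 modparam("avpops", "db_url", "mysql://opensips:opensipsrw@localhost/opensips")

 # load every caller ID permitted for this user into a multi-value AVP
 avp_db_query("SELECT allowed_cid FROM cid_ranges
   WHERE username='$avp(s:authuser)'", "$avp(s:cid)");

 # the 'g' flag checks the From user against all loaded values
 if (!avp_check("$avp(s:cid)", "eq/$fU/g")) {
  sl_send_reply("403", "Invalid outbound caller ID");
  exit;
 }

Since the allowed values live in a database table, this keeps the
restriction administrable without restricting each user to a single
caller ID.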

Andrew

--
Andrew Yager, Managing Director   (MACS BCompSc MCP)
Real World Technology Solutions Pty Ltd  - your IT department
ph: 1300 798 718 or (02) 9037 0500
fax: (02) 9037 0591 mob: 0405 152 568
http://www.rwts.com.au/ or http://www.stonebridgecomputing.com.au/







Re: [OpenSIPS-Users] Restricting CallerID

2009-03-08 Thread Andrew Yager
On 09/03/2009, at 4:39 PM, Alex Balashov wrote:

> I suggest setting an rpid credential for these users in the
> subscriber table.  This can be applied when the outbound call from
> the user is proxy-challenged.

Thanks. The reason I don't like this solution is that (I think) it
will restrict them to a single outbound caller ID, whereas we would
like them to be able to set any number in their range.

Am I wrong in my assumption?

Thanks,
Andrew

--
Andrew Yager, Managing Director   (MACS BCompSc MCP)
Real World Technology Solutions Pty Ltd  - your IT department
ph: 1300 798 718 or (02) 9037 0500
fax: (02) 9037 0591 mob: 0405 152 568
http://www.rwts.com.au/ or http://www.stonebridgecomputing.com.au/







[OpenSIPS-Users] opensips-cp cdrviewer

2008-11-26 Thread Andrew Yager
Hi,

I'm working with opensips-cp and trying to get the cdrviewer to work.

The acc table has the following structure:

CREATE TABLE acc (
 id INT(10) UNSIGNED AUTO_INCREMENT PRIMARY KEY NOT NULL,
 method VARCHAR(16) DEFAULT '' NOT NULL,
 from_tag VARCHAR(64) DEFAULT '' NOT NULL,
 to_tag VARCHAR(64) DEFAULT '' NOT NULL,
 callid VARCHAR(64) DEFAULT '' NOT NULL,
 sip_code VARCHAR(3) DEFAULT '' NOT NULL,
 sip_reason VARCHAR(32) DEFAULT '' NOT NULL,
 time DATETIME NOT NULL
) ENGINE=MyISAM;

but the stored procedure seems to require a whole lot of extra fields.

What table structure/standard extra DB fields should be in place to
get this to work?
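For reference, the columns below are a hypothetical sketch of the
CDR-related fields the acc module can populate and that the CDR
tooling typically expects. The authoritative list is in the .sql
files shipped with your opensips-cp version, so verify against those
before altering the table.

 ALTER TABLE acc
  ADD COLUMN duration INT(11) UNSIGNED DEFAULT 0 NOT NULL,
  ADD COLUMN setuptime INT(11) UNSIGNED DEFAULT 0 NOT NULL,
  ADD COLUMN created DATETIME;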

Thanks,
Andrew

--
Andrew Yager, Managing Director   (MACS BCompSc MCP)
Real World Technology Solutions Pty Ltd
ph: 1300 798 718 or (02) 9037 0500
fax: (02) 9037 0591 mob: 0405 152 568
http://www.rwts.com.au/ or http://www.stonebridgecomputing.com.au/




[OpenSIPS-Users] Multi-Leg accounting

2008-11-26 Thread Andrew Yager
Hi,

As you may be able to tell from all of my recent posts, I'm playing
with accounting. :-)

We're not supporting redirects or diversions at the moment on the
proxy side, so I don't think this is relevant for us.

But typically, at what point (and with what values) would you record
the various src_leg and dst_leg parameters?
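For anyone finding this later, multi-leg accounting in the acc module
is driven by the multi_leg_info parameter, a semicolon-separated list
of name=pseudo-variable pairs evaluated when the accounting record is
written. The sketch below is hypothetical: the AVP names and the
choice of $fU/$rU as leg values are assumptions to adapt to your
routing logic.

 modparam("acc", "multi_leg_info", "src_leg=$avp(s:src);dst_leg=$avp(s:dst)")

 # typically populated just before relaying, once routing has settled
 $avp(s:src) = $fU;  # calling party
 $avp(s:dst) = $rU;  # called party after any rewriting
 t_relay();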

Thanks,
Andrew

--
Andrew Yager, Managing Director   (MACS BCompSc MCP)
Real World Technology Solutions Pty Ltd
ph: 1300 798 718 or (02) 9037 0500
fax: (02) 9037 0591 mob: 0405 152 568
http://www.rwts.com.au/ or http://www.stonebridgecomputing.com.au/




[OpenSIPS-Users] Extract userid from SIP URI for radius accounting purposes

2008-11-25 Thread Andrew Yager
Hi,

Our RADIUS application is having trouble parsing the full SIP URI for
accounting purposes in the packets being sent to it. A simple
solution would be to extract the user (i.e. the phone number) portion
of the From and To fields and pass that to the RADIUS server.

Is there already an AVP that holds this information, or alternatively
a module we can use to extract it and insert it into an AVP?
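For anyone finding this later: the core pseudo-variables $fU and $tU
already expand to just the user part of the From and To URIs, so no
extra module is needed. A hypothetical sketch using the acc module's
radius_extra parameter follows; the attribute names are placeholders
and must exist in your RADIUS dictionary.

 # ship extra RADIUS attributes built from pseudo-variables
 modparam("acc", "radius_extra", "Calling-Station-Id=$fU;Called-Station-Id=$tU")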

Thanks,
Andrew

--
Andrew Yager, Managing Director   (MACS BCompSc MCP)
Real World Technology Solutions Pty Ltd  - your IT department
ph: 1300 798 718 or (02) 9037 0500
fax: (02) 9037 0591 mob: 0405 152 568
http://www.rwts.com.au/ or http://www.stonebridgecomputing.com.au/







[OpenSIPS-Users] auth_radius via non-digest methods

2008-11-20 Thread Andrew Yager
Hi,

We are working with an external RADIUS software package that doesn't
seem to support RFC 4590 (RADIUS Extension for Digest Authentication).

Is there a way to implement RADIUS authentication within OpenSIPS
that does not send these fields and sticks to the standard PAP/CHAP
authentication methods?

Thanks,
Andrew

--
Andrew Yager, Managing Director   (MACS BCompSc MCP)
Real World Technology Solutions Pty Ltd
ph: 1300 798 718 or (02) 9037 0500
fax: (02) 9037 0591 mob: 0405 152 568
http://www.rwts.com.au/ or http://www.stonebridgecomputing.com.au/



