Thank you for the reply. On top of that, with dialog replication I am
getting the messages below, where the bin listener was set to the actual
LAN interface instead of the VIP.


Jul 11 10:28:12 aitossbc01 /usr/sbin/opensips[5318]: WARNING:dialog:fetch_socket_info: non-local socket <udp:207.210.246.39:5060>...ignoring
Jul 11 10:28:12 aitossbc01 /usr/sbin/opensips[5318]: ERROR:dialog:dlg_replicated_create: Replicated dialog doesn't match any listening sockets
Jul 11 10:28:12 aitossbc01 /usr/sbin/opensips[5318]: ERROR:dialog:receive_dlg_repl: Failed to process a binary packet!
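For replicated dialogs to be accepted, the receiving node needs a listen
line matching the socket the dialog was created on (that is what
fetch_socket_info is complaining about). A minimal sketch of an identical
listener set on both nodes, assuming the 207.210.246.39 address in the
warning is your WAN-side VIP and that net.ipv4.ip_nonlocal_bind=1 lets
the standby node bind addresses it doesn't currently hold:

```
# sketch: keep this listener set identical on both cluster nodes,
# so replicated dialogs always match a local socket
listen=udp:207.210.246.39:5060    # WAN VIP (address taken from the warning above)
listen=udp:10.100.104.7:5060      # LAN VIP
listen=bin:10.100.104.7:5585      # BIN replication on the LAN VIP, not the LAN interface
```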

volga629

On Tue, Jul 10, 2018 at 11:17 AM, Pasan Meemaduma <pasan...@ymail.com> wrote:
HI Volga,

I haven't used the mhomed=1 param; apologies, I'm not sure how to help there.


On Tuesday, 10 July 2018, 4:51:12 PM GMT+5:30, <volga...@networklab.ca> wrote:


Hello Pasan,
The whole issue is mhomed=1: it can't determine the correct socket, or
doesn't have the logic to work with VIP addresses. On the LAN side I have
to use

force_send_socket(udp:vip1:5060);

But the issue right now is: if one node goes down and the VIP is
relocated to another node, how do I determine which socket to use so the
call is sent from the correct source IP?
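One hedged way around the failover question, assuming
net.ipv4.ip_nonlocal_bind=1 so either node can bind every VIP: give both
nodes the identical listener set and force the socket by VIP address,
never by node. The same script then works on whichever node currently
holds the VIP, and keepalived alone decides where the traffic actually
flows (the addresses are the vip1-vip3 examples from later in this
thread):

```
# sketch: identical on both nodes
listen=udp:10.100.104.7:5060   # vip1
listen=udp:10.100.104.8:5060   # vip2
listen=udp:10.100.104.9:5060   # vip3

route {
    # always send from the VIP socket, regardless of which
    # physical node this script is running on
    force_send_socket(udp:10.100.104.7:5060);
    t_relay();
}
```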


volga629

On Tue, Jul 10, 2018 at 6:44 AM, Pasan Meemaduma via Users
<users@lists.opensips.org> wrote:
> Hi Volga,
>
> It's a very common question about non-local address binding in Linux.
> Did you set the following via sysctl?
> net.ipv4.ip_nonlocal_bind=1
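> To make that setting survive a reboot, one common approach (the file
> name below is just conventional):
>
> ```
> # /etc/sysctl.d/99-nonlocal-bind.conf
> net.ipv4.ip_nonlocal_bind = 1
> ```
>
> then load it with `sysctl --system`, or apply it immediately with
> `sysctl -w net.ipv4.ip_nonlocal_bind=1`.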
>
>
>
> On Monday, 9 July 2018, 11:20:04 PM GMT+5:30, volga...@networklab.ca
> <volga...@networklab.ca> wrote:
>
>
> Hello Everyone,
> I'm trying to build an active/active test cluster with 3 VIPs, a
> separate virtual IP for each node, managed with keepalived.
> OpenSIPS is failing to bind the VIP on the corresponding VM.
> This is the interface config with the VIP:
>
> 4: ens37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq state
> UNKNOWN group default qlen 1000
>    link/ether 00:50:56:a2:d5:0c brd ff:ff:ff:ff:ff:ff
>    inet 10.100.104.4/28 brd 10.100.104.15 scope global ens37
>      valid_lft forever preferred_lft forever
>    inet 10.100.104.7/28 scope global secondary ens37  ---> Keepalived VIP
>      valid_lft forever preferred_lft forever
>    inet6 fe80::b279:d4d6:45a1:6dd6/64 scope link
>      valid_lft forever preferred_lft forever
>    inet6 fe80::f9a7:9cc4:6a27:4cde/64 scope link dadfailed tentative
>      valid_lft forever preferred_lft forever
>
>
> Jul 9 12:34:42 sbc01 /usr/sbin/opensips[6019]: Forwarding REGISTER to
> main registrar ~> [<sip:100@68.113.217.123:5060>]
> Jul  9 12:34:42 sbc01 /usr/sbin/opensips[6019]:
> ERROR:core:get_out_socket: no socket found
> Jul  9 12:34:42 sbc01 /usr/sbin/opensips[6019]:
> ERROR:mid_registrar:uri2sock: no corresponding socket for af 2
> Jul  9 12:34:42 sbc01 /usr/sbin/opensips[6019]:
> ERROR:mid_registrar:overwrite_req_contacts: failed to obtain next hop
> socket, ci=0_3830065743@192.168.1.13
> Jul  9 12:34:42 sbc01 /usr/sbin/opensips[6019]:
> ERROR:mid_registrar:mid_reg_req_fwded: failed to overwrite Contact
> URIs
> Jul  9 12:34:42 sbc01 /usr/sbin/opensips[6019]:
> ERROR:core:get_out_socket: no socket found
> Jul  9 12:34:42 sbc01 /usr/sbin/opensips[6019]:
> ERROR:tm:update_uac_dst: failed to fwd to af 2, proto 1  (no
> corresponding listening socket)
> Jul  9 12:34:42 sbc01 /usr/sbin/opensips[6019]:
> ERROR:tm:t_forward_nonack: failure to add branches
>
>
> Listen
>
> listen=udp:10.100.104.7:5060 vip1
> listen=udp:10.100.104.8:5060 vip2
> listen=udp:10.100.104.9:5060 vip3
> listen=bin:10.100.104.7:5585 bin cluster
>
> Any idea what the possible cause could be?
>
> volga629
>
>
> _______________________________________________
> Users mailing list
> Users@lists.opensips.org
> http://lists.opensips.org/cgi-bin/mailman/listinfo/users


