Re: [ovs-discuss] Segfault in tun_metadata_to_geneve__ while processing DNS requests from OVN controller

2018-07-27 Thread Ben Pfaff
On Fri, Jul 27, 2018 at 11:35:56AM +, Markus Blank-Burian wrote:
> Hello,
> 
> 
> 
> I am using Open vSwitch 2.9.2, controlled by a recent version of 
> networking-ovn and have problems with segmentation faults in 
> tun_metadata_to_geneve__ which are apparently caused by DNS requests 
> generated by the OVN controller. The .tab pointer in the flow variable is 
> NULL, as can be seen below. For completeness, I have included both the 
> contents of the flow metadata and the packet data. Last, there are a few log 
> entries with more of the packet processing data just before the segfault 
> (10.14.33.16, .17 are the tunnel endpoints, 10.14.30.75 is a virtual router 
> and 128.176.196.36 is a DNS server). To be sure that the issue is caused by 
> the DNS requests, I disabled DNS functionality manually in networking-ovn and 
> removed all DNS entries in the NB database. Since then, I have observed no 
> more segfaults.

Thanks for the bug report.

Would you mind trying out the following patch?  I believe that it should
correctly initialize the NULL member.
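
For background on why that pointer matters: flow_extract() fills in the
tunnel metadata bitmap but leaves the table that maps those bits to option
layouts unset, so the Geneve encoder dereferences NULL on the first set bit.
The sketch below is illustrative only (the toy_* names are made up, this is
not the OVS source); it just shows the failure shape and why assigning the
ofproto's tunnel table before re-executing the packet avoids it.

    /* Illustrative sketch, not OVS code: 'tab' maps bits in 'present_map'
     * to option layouts.  If 'tab' is still NULL when a bit is set, the
     * first lookup crashes, which matches the backtrace below. */
    #include <stddef.h>
    #include <stdint.h>

    struct toy_entry { size_t len; };
    struct toy_table { struct toy_entry entries[64]; };
    struct toy_metadata {
        uint64_t present_map;            /* one bit per Geneve option present */
        const struct toy_table *tab;     /* NULL until somebody assigns it */
    };

    static size_t
    toy_encode_len(const struct toy_metadata *md)
    {
        size_t total = 0;
        for (int i = 0; i < 64; i++) {
            if (md->present_map & (UINT64_C(1) << i)) {
                total += md->tab->entries[i].len;  /* segfaults if tab == NULL */
            }
        }
        return total;
    }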

--8<--cut here-->8--

From: Ben Pfaff 
Date: Fri, 27 Jul 2018 16:19:40 -0700
Subject: [PATCH] ofproto-dpif: Use ofproto tunnel metadata table for executing
 packet.

Reported-by: Markus Blank-Burian 
Reported-at: 
https://mail.openvswitch.org/pipermail/ovs-discuss/2018-July/047111.html
Signed-off-by: Ben Pfaff 
---
 ofproto/ofproto-dpif.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/ofproto/ofproto-dpif.c b/ofproto/ofproto-dpif.c
index 3365d4185926..51ff809c4c99 100644
--- a/ofproto/ofproto-dpif.c
+++ b/ofproto/ofproto-dpif.c
@@ -4862,6 +4862,7 @@ nxt_resume(struct ofproto *ofproto_,
 
     struct flow headers;
     flow_extract(&packet, &headers);
+    headers.tunnel.metadata.tab = ofproto_get_tun_tab(ofproto_);
 
     /* Execute the datapath actions on the packet. */
     struct dpif_execute execute = {
-- 
2.16.1


Re: [ovs-discuss] OVN - MTU path discovery

2018-07-27 Thread Ben Pfaff
On Thu, Jul 12, 2018 at 04:03:33PM +0200, Miguel Angel Ajo Pelayo wrote:
> Is there any way to match packet_size > X on a flow?
> 
> How could we implement this?

OVS doesn't currently have a way to do that.  Adding such a feature
would require kernel changes.

You mentioned ICMP at one point.  It would be pretty easy to implement
such a feature specifically for ICMP to logical router IP addresses in
OVN, because we could just slow-path such traffic to ovn-controller
(maybe we already do?) and check the packet size there.  I don't know
whether there's value in that.
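
To make the idea concrete, here is a rough sketch (plain C, not OVN code;
the names are invented for illustration) of the check such a slow-path
handler would make: if an IPv4 packet addressed to a logical router is
larger than the egress MTU and has DF set, answer with ICMP type 3 code 4
carrying the next-hop MTU, per RFC 1191.

    /* Sketch only, not OVN code: decide whether a "fragmentation needed"
     * ICMP error (type 3, code 4) should be generated for a packet that a
     * logical router cannot forward without fragmenting. */
    #include <stdbool.h>
    #include <stdint.h>

    #define IP_DF 0x4000                   /* "don't fragment" bit in frag_off */

    struct icmp_frag_needed {
        uint8_t  type;                     /* 3: destination unreachable */
        uint8_t  code;                     /* 4: fragmentation needed, DF set */
        uint16_t next_hop_mtu;             /* MTU the sender should back off to */
    };

    static bool
    needs_frag_reply(uint16_t ip_total_len, uint16_t frag_off_flags,
                     uint16_t egress_mtu, struct icmp_frag_needed *reply)
    {
        if (ip_total_len <= egress_mtu || !(frag_off_flags & IP_DF)) {
            return false;                  /* fits, or may legally be fragmented */
        }
        reply->type = 3;
        reply->code = 4;
        reply->next_hop_mtu = egress_mtu;
        return true;
    }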


Re: [ovs-discuss] OVN - MTU path discovery

2018-07-27 Thread Han Zhou
On Fri, Jul 27, 2018 at 12:49 AM, Miguel Angel Ajo Pelayo <
majop...@redhat.com> wrote:

>
>
>
> On 24 July 2018 at 17:43:51, Han Zhou (zhou...@gmail.com) wrote:
>
>
>
> On Tue, Jul 24, 2018 at 8:26 AM, Miguel Angel Ajo Pelayo <
> majop...@redhat.com> wrote:
>
>>
>>
>>
>> On 24 July 2018 at 17:20:59, Han Zhou (zhou...@gmail.com) wrote:
>>
>>
>>
>> On Thu, Jul 12, 2018 at 7:03 AM, Miguel Angel Ajo Pelayo <
>> majop...@redhat.com> wrote:
>>
>>> I believe we need to emit ICMP "need to frag" messages to have proper
>>> support for different MTUs (on router sides). I wonder how it works the
>>> other way around (when the external net is 1500, and the internal net is
>>> 1500 minus the Geneve overhead).
>>>
>>
>> I think this is expected, since the GW chassis forwards packets without
>> going through the IP stack.
>> One solution might be to use a network namespace on the GW node as an
>> intermediate hop, so that the IP stack on the GW will handle the
>> fragmentation (or reply with ICMP when DF is set). Of course this will add
>> some latency and also increase the complexity of the deployment, so I'd
>> rather tune the MTU properly to avoid the problem. But if east-west
>> performance is more important and HV <-> HV jumbo frames are supported,
>> then the namespace trick is probably worth it just to make external
>> traffic work regardless of the internal MTU settings. Does this make sense?
>>
>>
>> I believe we should avoid that path at all costs; it's the way the Neutron
>> reference implementation was built, and it's slower. It also has a lot of
>> complexity.
>>
>>
>> Sometimes the MTU will just be mismatched: the internal network/LS has a
>> bigger MTU to increase performance, but the external network is on the
>> standard 1500. In some cases such a thing could be circumvented by having a
>> leg of the external router with a big MTU just for OVN, but… if we look at
>> how people use OpenStack, for example, that probably renders most
>> deployments incompatible with OVN.
>>
>>
>> For example, customers tend to have several provider networks + external
>> networks, like legacy networks, different providers, etc.
>>
>>
>>
>>
>>> Is there any way to match packet_size > X on a flow?
>>>
>>> How could we implement this?
>>>
>> I didn't find anything for matching packet_size in ovs-fields(7). Even if
>> we could do this in OVN (e.g. through a controller action in the slow
>> path), I wonder whether it is really better than relying on the IP stack.
>> Maybe blp or someone else could shed some light on this :)
>>
>> I think that would be undesirable also.
>>
>>
>> I wonder how it works now, when the external network is generally on 1500
>> MTU while Geneve has a lower MTU.
>>
> Do you mean for example: VM has MTU: 1400, while external network and eth0
> (tunnel physical interface) of HVs and GWs are all 1500 MTU? Why would
> there be a problem in this case? Or did I misunderstand?
>
>
> In that case some handling is also necessary at some point. Imagine you
> have established a TCP connection through a floating IP (DNAT): when the
> packets traverse the router from the external network to the internal
> network, if the router is not handling MTU, a 1500-byte packet will be
> transmitted over the 1400 network, and either Geneve is
> fragmenting/defragmenting (very bad for performance) or, if the packet went
> over a VLAN, it would be dropped when arriving at the final hypervisor.
>
>
> Am I right, or am I missing something? I need to actually try it and look
> at the traffic/packets.
>
In my example above, all physical interfaces are at MTU 1500; only the
VM's internal MTU setting is 1400. In this case I don't think there is any
IP fragmentation or dropping happening, because the MSS of the TCP
connection should be adjusted during the handshake to fit the MTU of 1400
(or smaller, if the remote endpoint has an MTU < 1400).

Or maybe you are talking about some different settings?
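
For concreteness, the arithmetic I have in mind (illustrative only,
assuming IPv4 and TCP without options):

    /* MSS advertised for a given IPv4 path MTU, assuming 20-byte IP and
     * 20-byte TCP headers with no options. */
    #include <assert.h>

    static int mss_for_ipv4_mtu(int mtu)
    {
        return mtu - 20 - 20;
    }

    int main(void)
    {
        assert(mss_for_ipv4_mtu(1400) == 1360);  /* VM with internal MTU 1400 */
        assert(mss_for_ipv4_mtu(1500) == 1460);  /* standard Ethernet MTU */
        return 0;
    }

So full-sized segments from the remote 1500-MTU side are already capped at
1360 bytes of payload by the VM's SYN, and nothing on the path needs to
fragment.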


[ovs-discuss] Segfault in tun_metadata_to_geneve__ while processing DNS requests from OVN controller

2018-07-27 Thread Markus Blank-Burian
Hello,



I am using Open vSwitch 2.9.2, controlled by a recent version of 
networking-ovn and have problems with segmentation faults in 
tun_metadata_to_geneve__ which are apparently caused by DNS requests generated 
by the OVN controller. The .tab pointer in the flow variable is NULL, as can be 
seen below. For completeness, I have included both the contents of the flow 
metadata and the packet data. Last, there are a few log entries with more of 
the packet processing data just before the segfault (10.14.33.16, .17 are the 
tunnel endpoints, 10.14.30.75 is a virtual router and 128.176.196.36 is a DNS 
server). To be sure that the issue is caused by the DNS requests, I disabled 
DNS functionality manually in networking-ovn and removed all DNS entries in the 
NB database. Since then, I have observed no more segfaults.



Best regards,

Markus



Program terminated with signal 11, Segmentation fault.

#0  tun_metadata_to_geneve__ (flow=flow@entry=0x7ffc56906c90, 
b=b@entry=0x7ffc568ea958, crit_opt=crit_opt@entry=0x7ffc568ea657) at 
lib/tun-metadata.c:676

676 opt = ofpbuf_put_uninit(b, sizeof *opt + entry->loc.len);

(gdb) bt

#0  tun_metadata_to_geneve__ (flow=flow@entry=0x7ffc56906c90, 
b=b@entry=0x7ffc568ea958, crit_opt=crit_opt@entry=0x7ffc568ea657) at 
lib/tun-metadata.c:676

#1  0x55ed707199ab in tun_metadata_to_geneve_nlattr_flow (b=0x7ffc568ea958, 
flow=0x7ffc56906c50) at lib/tun-metadata.c:706

#2  tun_metadata_to_geneve_nlattr (tun=tun@entry=0x7ffc56906c50, 
flow=flow@entry=0x7ffc56906c50, key=key@entry=0x0, b=b@entry=0x7ffc568ea958) at 
lib/tun-metadata.c:810

#3  0x55ed7069e0c2 in tun_key_to_attr (a=a@entry=0x7ffc568ea958, 
tun_key=tun_key@entry=0x7ffc56906c50, 
tun_flow_key=tun_flow_key@entry=0x7ffc56906c50, key_buf=key_buf@entry=0x0, 
tnl_type=, tnl_type@entry=0x0) at lib/odp-util.c:2787

#4  0x55ed706a9592 in odp_key_from_dp_packet (buf=buf@entry=0x7ffc568ea958, 
packet=0x7ffc56906b40) at lib/odp-util.c:5674

#5  0x55ed70729340 in dpif_netlink_encode_execute (buf=0x7ffc568ea958, 
d_exec=0x7ffc569067a8, dp_ifindex=) at lib/dpif-netlink.c:1858

#6  dpif_netlink_operate__ (dpif=dpif@entry=0x55ed70bd74e0, 
ops=ops@entry=0x7ffc56906798, n_ops=n_ops@entry=1) at lib/dpif-netlink.c:1944

#7  0x55ed70729996 in dpif_netlink_operate_chunks (n_ops=1, 
ops=0x7ffc56906798, dpif=) at lib/dpif-netlink.c:2243

#8  dpif_netlink_operate (dpif_=0x55ed70bd74e0, ops=0x7ffc56906798, 
n_ops=) at lib/dpif-netlink.c:2279

#9  0x55ed706705f3 in dpif_operate (dpif=0x55ed70bd74e0, 
ops=ops@entry=0x7ffc56906798, n_ops=n_ops@entry=1) at lib/dpif.c:1359

#10 0x55ed70670dd8 in dpif_execute (dpif=, 
execute=execute@entry=0x7ffc56906830) at lib/dpif.c:1324

#11 0x55ed70623ca1 in nxt_resume (ofproto_=0x55ed70bb10c0, 
pin=0x7ffc569072b0) at ofproto/ofproto-dpif.c:4874

#12 0x55ed70610206 in handle_nxt_resume 
(ofconn=ofconn@entry=0x55ed70c1b2a0, oh=oh@entry=0x55ed70c2c970) at 
ofproto/ofproto.c:3607

#13 0x55ed7061c29b in handle_openflow__ (msg=0x55ed70bb8c50, 
ofconn=0x55ed70c1b2a0) at ofproto/ofproto.c:8130

#14 handle_openflow (ofconn=0x55ed70c1b2a0, ofp_msg=0x55ed70bb8c50) at 
ofproto/ofproto.c:8251

#15 0x55ed7064c793 in ofconn_run (handle_openflow=0x55ed7061bfd0 
, ofconn=0x55ed70c1b2a0) at ofproto/connmgr.c:1432

#16 connmgr_run (mgr=0x55ed70bb1650, 
handle_openflow=handle_openflow@entry=0x55ed7061bfd0 ) at 
ofproto/connmgr.c:363

#17 0x55ed70615fae in ofproto_run (p=0x55ed70bb10c0) at 
ofproto/ofproto.c:1816

#18 0x55ed706034ac in bridge_run__ () at vswitchd/bridge.c:2939

#19 0x55ed70609528 in bridge_run () at vswitchd/bridge.c:2997

#20 0x55ed705ff99d in main (argc=10, argv=0x7ffc56908738) at 
vswitchd/ovs-vswitchd.c:119



(gdb) print *flow

$1 = {present = {map = 1, len = 1 '\001'}, tab = 0x0, opts = {u8 = 
"\000\001\000\003", '\000' , gnv = {{opt_class = 256, type = 
0 '\000', length = 3 '\003', r3 = 0 '\000', r2 = 0 '\000', r1 = 0 '\000'}, 
{opt_class = 0, type = 0 '\000', length = 0 '\000', r3 = 0 '\000',

r2 = 0 '\000', r1 = 0 '\000'} }}}

(gdb) frame 4

#4  0x55ed706a9592 in odp_key_from_dp_packet (buf=buf@entry=0x7ffc568ea958, 
packet=0x7ffc56906b40) at lib/odp-util.c:5674

5674tun_key_to_attr(buf, >tunnel, >tunnel, NULL, NULL);

(gdb) print *packet

$2 = {mbuf = {cacheline0 = 0x7ffc56906b40, buf_addr = 0x55ed70c37710, {buf_iova 
= 94478281328073, buf_physaddr = 94478281328073}, rearm_data = 0x7ffc56906b50, 
data_off = 0, {refcnt_atomic = {cnt = 11395}, refcnt = 11395}, nb_segs = 21997, 
port = 0, ol_flags = 0,

rx_descriptor_fields1 = 0x7ffc56906b60, {packet_type = 8, {l2_type = 8, 
l3_type = 0, l4_type = 0, tun_type = 0, {inner_esp_next_proto = 0 '\000', 
{inner_l2_type = 0 '\000', inner_l3_type = 0 '\000'}}, inner_l4_type = 0}}, 
pkt_len = 89, data_len = 89, vlan_tci = 28881, hash = {rss = 21997,

  fdir = {{{hash = 21997, id = 0}, lo = 21997}, hi = 1891540728}, sched = 
{lo = 21997, hi = 

Re: [ovs-discuss] OVN - MTU path discovery

2018-07-27 Thread Miguel Angel Ajo Pelayo
On 24 July 2018 at 17:43:51, Han Zhou (zhou...@gmail.com) wrote:



On Tue, Jul 24, 2018 at 8:26 AM, Miguel Angel Ajo Pelayo <
majop...@redhat.com> wrote:

>
>
>
> On 24 July 2018 at 17:20:59, Han Zhou (zhou...@gmail.com) wrote:
>
>
>
> On Thu, Jul 12, 2018 at 7:03 AM, Miguel Angel Ajo Pelayo <
> majop...@redhat.com> wrote:
>
>> I believe we need to emit ICMP "need to frag" messages to have proper
>> support for different MTUs (on router sides). I wonder how it works the
>> other way around (when the external net is 1500, and the internal net is
>> 1500 minus the Geneve overhead).
>>
>
> I think this is expected, since the GW chassis forwards packets without
> going through the IP stack.
> One solution might be to use a network namespace on the GW node as an
> intermediate hop, so that the IP stack on the GW will handle the
> fragmentation (or reply with ICMP when DF is set). Of course this will add
> some latency and also increase the complexity of the deployment, so I'd
> rather tune the MTU properly to avoid the problem. But if east-west
> performance is more important and HV <-> HV jumbo frames are supported,
> then the namespace trick is probably worth it just to make external
> traffic work regardless of the internal MTU settings. Does this make sense?
>
>
> I believe we should avoid that path at all costs; it's the way the Neutron
> reference implementation was built, and it's slower. It also has a lot of
> complexity.
>
>
> Sometimes the MTU will just be mismatched: the internal network/LS has a
> bigger MTU to increase performance, but the external network is on the
> standard 1500. In some cases such a thing could be circumvented by having a
> leg of the external router with a big MTU just for OVN, but… if we look at
> how people use OpenStack, for example, that probably renders most
> deployments incompatible with OVN.
>
>
> For example, customers tend to have several provider networks + external
> networks, like legacy networks, different providers, etc.
>
>
>
>
>> Is there any way to match packet_size > X on a flow?
>>
>> How could we implement this?
>>
> I didn't find anything for matching packet_size in ovs-fields(7). Even if
> we could do this in OVN (e.g. through a controller action in the slow
> path), I wonder whether it is really better than relying on the IP stack.
> Maybe blp or someone else could shed some light on this :)
>
> I think that would be undesirable also.
>
>
> I wonder how it works now, when the external network is generally on 1500
> MTU while Geneve has a lower MTU.
>
Do you mean for example: VM has MTU: 1400, while external network and eth0
(tunnel physical interface) of HVs and GWs are all 1500 MTU? Why would
there be a problem in this case? Or did I misunderstand?


In that case some handling is also necessary at some point. Imagine you
have established a TCP connection through a floating IP (DNAT): when the
packets traverse the router from the external network to the internal
network, if the router is not handling MTU, a 1500-byte packet will be
transmitted over the 1400 network, and either Geneve is
fragmenting/defragmenting (very bad for performance) or, if the packet went
over a VLAN, it would be dropped when arriving at the final hypervisor.


Am I right, or am I missing something? I need to actually try it and look
at the traffic/packets.





>
>
> Thanks,
> Han
>
>
>>
>>
>> On Wed, Jul 11, 2018 at 1:01 PM Daniel Alvarez Sanchez <
>> dalva...@redhat.com> wrote:
>>
>>> On Wed, Jul 11, 2018 at 12:55 PM Daniel Alvarez Sanchez <
>>> dalva...@redhat.com> wrote:
>>>
 Hi all,

 Miguel Angel Ajo and I have been trying to setup Jumbo frames in
 OpenStack using OVN as a backend.

 The external network has an MTU of 1900 while we have created two
 tenant networks (Logical Switches) with an MTU of 8942.

>>>
>>> s/1900/1500
>>>

 When pinging from one instance in one of the networks to the other
 instance on the other network, the routing takes place locally and
 everything is fine. We can ping with -s 3000 and with tcpdump we verify
 that the packets are not fragmented at all.

 However, when trying to reach the external network, we see that no attempt
 is made to fragment the packets and the traffic doesn't go through.

 In the ML2/OVS case (reference implementation for OpenStack
 networking), this works as we're seeing the following when attempting to
 reach a network with a lower MTU:

>>>
>>> Just to clarify, in the reference implementation (ML2/OVS) the routing
>>> takes place with iptables rules so we assume that it's the kernel
>>> processing those ICMP packets.
>>>

 10:38:03.807695 IP 192.168.20.14 > dell-virt-lab-01.mgmt.com: ICMP
 echo request, id 30977, seq 0, length 3008

 10:38:03.807723 IP overcloud-controller-0 > 192.168.20.14: ICMP
 dell-virt-lab-01.mgmt.com unreachable - need to frag (mtu 1500),
 length 556

 As you can see, the router (overcloud-controller-0) is responding to
 the instance with an ICMP need to 

Re: [ovs-discuss] [ovs-dev] ovsdb-server core dump and ovsdb corruption using raft cluster

2018-07-27 Thread Girish Moodalbail
Hello Ben,

Sorry, got distracted with something else at work. I am still able to
reproduce the issue, and this is what I have and what I did
(if you need the core, let me know and I can share it with you)

- 3-node RAFT cluster setup in Ubuntu VMs (2 vCPUs with 8 GB RAM)
  $ uname -r
  Linux u1804-HVM-domU 4.15.0-23-generic #25-Ubuntu SMP Wed May 23 18:02:16
UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

- On all of the VMs, I have installed openvswitch-switch=2.9.2,
openvswitch-dbg=2.9.2, and ovn-central=2.9.2
  (all of these packages are from http://packages.wand.net.nz/)

- I bring up the nodes in the cluster one after the other -- the leader first,
followed by the two followers
- I check the cluster status and everything is healthy
- ovn-nbctl show and ovn-sbctl show are both empty

- On the leader, with OVN_NB_DB set to the comma-separated NB connection
strings, I ran
   for i in `seq 1 50`; do ovn-nbctl ls-add ls$i; ovn-nbctl lsp-add ls$i
port0_$i; done

- Check for the presence of 50 logical switches and 50 logical ports (one
on each switch). Compact the database on all the nodes.

- Next I try to delete the ports, and while the deletion is happening I run a
compaction on one of the followers

  leader_node# for i in `seq  1 50`; do ovn-nbctl lsp-del port0_$i;done
  follower_node# ovs-appctl -t /var/run/openvswitch/ovnnb_db.ctl
ovsdb-server/compact OVN_Northbound

- On the follower node I see the crash:

● ovn-central.service - LSB: OVN central components
   Loaded: loaded (/etc/init.d/ovn-central; generated)
   Active: active (running) since Thu 2018-07-26 22:48:53 PDT; 19min ago
 Docs: man:systemd-sysv-generator(8)
  Process: 21883 ExecStop=/etc/init.d/ovn-central stop (code=exited,
status=0/SUCCESS)
  Process: 21934 ExecStart=/etc/init.d/ovn-central start (code=exited,
status=0/SUCCESS)
Tasks: 10 (limit: 4915)
   CGroup: /system.slice/ovn-central.service
   ├─22047 ovsdb-server: monitoring pid 22134 (*1 crashes: pid
22048 died, killed (Aborted), core dumped*
   ├─22059 ovsdb-server: monitoring pid 22060 (healthy)
   ├─22060 ovsdb-server -vconsole:off -vfile:info
--log-file=/var/log/openvswitch/ovsdb-server-sb.log -
   ├─22072 ovn-northd: monitoring pid 22073 (healthy)
   ├─22073 ovn-northd -vconsole:emer -vsyslog:err -vfile:info
--ovnnb-db=tcp:10.0.7.33:6641,tcp:10.0.7.
   └─22134 ovsdb-server -vconsole:off -vfile:info
--log-file=/var/log/openvswitch/ovsdb-server-nb.log


Same call trace and reason:

#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
#1  0x7f79599a1801 in __GI_abort () at abort.c:79
#2  0x5596879c017c in json_serialize (json=,
s=) at ../lib/json.c:1554
#3  0x5596879c01eb in json_serialize_object_member (i=,
s=, node=, node=) at
../lib/json.c:1583
#4  0x5596879c0132 in json_serialize_object (s=0x7ffc17013bf0,
object=0x55968993dcb0) at ../lib/json.c:1612
#5  json_serialize (json=, s=0x7ffc17013bf0) at
../lib/json.c:1533
#6  0x5596879c249c in json_to_ds (json=json@entry=0x559689950670,
flags=flags@entry=0, ds=ds@entry=0x7ffc17013c80) at ../lib/json.c:1511
#7  0x5596879ae8df in ovsdb_log_compose_record
(json=json@entry=0x559689950670,
magic=0x55968993dc60 "CLUSTER", header=header@entry=0x7ffc17013c60,
data=data@entry=0x7ffc17013c80) at ../ovsdb/log.c:570
#8  0x5596879aebbf in ovsdb_log_write (file=0x5596899b5df0,
json=0x559689950670) at ../ovsdb/log.c:618
#9  0x5596879aed3e in ovsdb_log_write_and_free
(log=log@entry=0x5596899b5df0,
json=0x559689950670) at ../ovsdb/log.c:651
#10 0x5596879b0954 in raft_write_snapshot (raft=raft@entry=0x5596899151a0,
log=0x5596899b5df0, new_log_start=new_log_start@entry=166,
new_snapshot=new_snapshot@entry=0x7ffc17013e30) at ../ovsdb/raft.c:3588
#11 0x5596879b0ec3 in raft_save_snapshot (raft=raft@entry=0x5596899151a0,
new_start=new_start@entry=166, new_snapshot=new_snapshot@entry
=0x7ffc17013e30)
at ../ovsdb/raft.c:3647
#12 0x5596879b8aed in raft_store_snapshot (raft=0x5596899151a0,
new_snapshot_data=new_snapshot_data@entry=0x5596899505f0) at
../ovsdb/raft.c:3849
#13 0x5596879a579e in ovsdb_storage_store_snapshot__
(storage=0x5596899137a0, schema=0x559689938ca0, data=0x559689946ea0) at
../ovsdb/storage.c:541
#14 0x5596879a625e in ovsdb_storage_store_snapshot
(storage=0x5596899137a0, schema=schema@entry=0x559689938ca0,
data=data@entry=0x559689946ea0)
at ../ovsdb/storage.c:568
#15 0x55968799f5ab in ovsdb_snapshot (db=0x5596899137e0) at
../ovsdb/ovsdb.c:519
#16 0x559687999f23 in ovsdb_server_compact (conn=0x559689938440,
argc=, argv=, dbs_=0x7ffc170141c0) at
../ovsdb/ovsdb-server.c:1443
#17 0x5596879d9cc0 in process_command (request=,
conn=0x559689938440) at ../lib/unixctl.c:315
#18 run_connection (conn=0x559689938440) at ../lib/unixctl.c:349
#19 unixctl_server_run (server=0x559689937370) at ../lib/unixctl.c:400
#20 0x559687996e1e in main_loop (is_backup=0x7ffc1701412e,
exiting=0x7ffc1701412f, run_process=0x0,