Re: [ovs-discuss] VLAN mode=dot1q-tunnel and tags in OVS

2018-11-27 Thread Eric Garver
On Mon, Nov 19, 2018 at 07:32:46AM -0800, Sim Paul wrote:
> >
> >
> > > I am still trying to understand the test case behavior that i pasted in
> > my
> > > previous email.
> > > In my first test case, when vlan-limit=1, the ping worked because
> > > only the outside VLAN tag (36) was inspected?
> > > But in the second case, when I set vlan-limit=2, the ping stopped
> > > working because both tags 36 and 120 were inspected?
> > >
> > > Shouldn't the ping work even in second test case ?
> >
> > I'm not sure. Your configuration is a bit odd. dot1q-tunnel should only
> > be configured at the ends, but it sounds like you've added it to the
> > patch ports as well.
> >
> > Are you saying you are able to ping a virtual machine sitting on a
> neighboring OVS bridge by simply configuring dot1q-tunnel at the end
> points (VM NICs)? Please confirm.
> For me, if I don't configure all 4 ports (two VM vNICs and two patch
> ports) as dot1q-tunnel, VM1 sitting on ovsbr1 CANNOT ping VM2 sitting
> on ovsbr2.
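For reference, the edge-port configuration being discussed looks roughly like the following (port name "vnic0" and tag 36 are placeholders, a sketch rather than the poster's exact setup):

```shell
# Make the VM-facing edge port a dot1q-tunnel (QinQ-style) port that
# pushes/pops the outer service tag 36:
ovs-vsctl set port vnic0 vlan_mode=dot1q-tunnel tag=36

# Control how many VLAN headers OVS parses when matching (the setting
# at the center of this thread):
ovs-vsctl set Open_vSwitch . other_config:vlan-limit=2
```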

Sorry for the delay. I was on holiday and traveling.

I took another look. Using dot1q-tunnel on an already double-tagged
packet will not work if vlan-limit == 2 or vlan-limit == 0.

During the xlate phase dot1q-tunnel temporarily pushes another VLAN
tag onto the internal xlate structures - think of it as an implicit
push_vlan. Because the flow already has two tags, it shifts the VLANs to
the right and the right-most VLAN is lost. On the other end the
dot1q-tunnel VLAN is stripped, leaving a single VLAN.
i.e.


ingress input:  [VLAN 36] [VLAN 100]
ingress output: [VLAN xx] [VLAN 36]
 egress input:  [VLAN xx] [VLAN 36]
 egress output:   [VLAN 36]

where "xx" is your dot1q-tunnel tag.

So why does it work with vlan-limit=1?

Recall that with vlan-limit=1 the second VLAN is _not_ parsed as a VLAN
(it'll be the dl_type). The xlate structure has slots for 2 VLANs
regardless of the value of vlan-limit. This means the temporary/internal
shift of the VLAN works, as there is room (only one VLAN was parsed).
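Both cases can be sketched with a toy model of the fixed-size VLAN slots (illustrative Python only, not actual OVS code):

```python
FLOW_MAX_VLAN_HEADERS = 2  # slots in the xlate structure, per vlan-limit default

def implicit_push_vlan(slots, tunnel_tag):
    """Model dot1q-tunnel's implicit push: shift slots right; overflow is lost."""
    return ([tunnel_tag] + slots)[:FLOW_MAX_VLAN_HEADERS]

# vlan-limit=2: both tags were parsed, so the inner tag is shifted out and lost.
print(implicit_push_vlan([36, 100], "xx"))  # ['xx', 36]

# vlan-limit=1: only the outer tag was parsed (the inner one stays in dl_type),
# so there is a free slot and nothing is lost by the shift.
print(implicit_push_vlan([36], "xx"))       # ['xx', 36]
```

This is why stripping the "xx" tag on egress leaves a single VLAN 36 in the vlan-limit=2 case, matching the diagram above.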

Possible fix:

I think struct xvlan could be of size FLOW_MAX_VLAN_HEADERS + 1 to allow
the temporary/internal shift caused by dot1q-tunnel. Although I'm not
sure if this would cause a regression elsewhere.

Can you try this untested patch?

diff --git a/ofproto/ofproto-dpif-xlate.c b/ofproto/ofproto-dpif-xlate.c
index 507e14dd0d00..4f86e7704a50 100644
--- a/ofproto/ofproto-dpif-xlate.c
+++ b/ofproto/ofproto-dpif-xlate.c
@@ -418,7 +418,8 @@ struct xvlan_single {
 };

 struct xvlan {
-    struct xvlan_single v[FLOW_MAX_VLAN_HEADERS];
+    /* Add 1 to the size to allow temporary/internal shift for dot1q-tunnel. */
+    struct xvlan_single v[FLOW_MAX_VLAN_HEADERS + 1];
 };

 const char *xlate_strerror(enum xlate_error error)
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


[ovs-discuss] OVS DPDK performance issue with inter-NUMA data paths

2018-11-27 Thread Onkar Pednekar
Hi all,

I am able to get expected performance using ovs dpdk on a single socket
system.
But on a system with 2 NUMA nodes, the throughput is less than expected.

The system has 8 physical cores per socket with hyperthreading enabled,
so 32 logical cores in total.

Only one physical 10G interface is being used, which after binding to
DPDK is associated with socket 1.

OVS passes the traffic from this interface to the dpdkvhostuser
interfaces of 2 VMs; the vCPUs of each VM are pinned to physical cores
from different sockets.

So the traffic flow is as follows:
PHY <-> VM1 <-> PHY
PHY <-> VM2 <-> PHY

Since the only physical DPDK interface is associated with socket 1, I see
that the PMD core on socket 1 is 100% utilized but no work is done by the
core of socket 2 where the other PMD thread is pinned. I know this is
expected, since there are no DPDK interfaces associated with socket 2. But
since I have VMs pinned to cores of socket 2, there is a cross-node packet
transfer which I think is affecting the performance.

I wanted to know if there is any configuration or parameter that can help
optimize this inter-NUMA data path.
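For what it's worth, the knobs usually tried for this are sketched below (masks, port names, and memory values are placeholders, not a verified fix for this setup):

```shell
# Inspect which PMD thread polls which rx queue, and on which NUMA node:
ovs-appctl dpif-netdev/pmd-rxq-show

# Give the datapath more PMD cores on the port's local socket
# (placeholder CPU mask) so the single 10G port isn't polled by one core:
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x0f0f

# Add rx queues on the physical port so load can be spread across PMDs:
ovs-vsctl set Interface dpdk0 options:n_rxq=2

# Reserve hugepage memory on both NUMA nodes so vhost-user ports can
# allocate on their local node:
ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="1024,1024"
```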

Thanks,
Onkar


Re: [ovs-discuss] OVS DPDK performance for TCP traffic versus UDP

2018-11-27 Thread Onkar Pednekar
Hi,

I managed to solve this performance issue. I got improved performance after
turning off mrg_rxbuf and increasing the rx and tx queue sizes to 1024.
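Concretely, that tuning looks roughly like the following (interface and netdev names are placeholders; the queue-size options on the physical DPDK port and the virtio-net-pci properties are the usual knobs, though the exact names may vary with the OVS/qemu versions in use):

```shell
# Larger descriptor rings on the physical DPDK port:
ovs-vsctl set Interface dpdk0 options:n_rxq_desc=1024 options:n_txq_desc=1024

# On the guest side, disable mergeable rx buffers and enlarge the virtio
# queues via the qemu device properties (illustrative, one line):
#   -device virtio-net-pci,netdev=net0,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024
```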

Thanks,
Onkar

On Thu, Nov 8, 2018 at 2:57 PM Onkar Pednekar  wrote:

> Hi,
>
> We figured out that the packet processing appliance within the VM (which
> reads from a raw socket on the dpdk vhost-user interface) requires more
> packets per second to give higher throughput. Otherwise its CPU
> utilization is mostly idle.
>
> We increased "tx-flush-interval" from the default 0 to 500 and the
> throughput increased from 300 Mbps to 600 Mbps (but we expect 1G). Also, we
> saw that the PPS on the VM RX interface increased from 35 kpps to 68 kpps.
> Higher values of "tx-flush-interval" don't help.
>
> Also, disabling mrg_rxbuf seems to give better performance, i.e.
> virtio-net-pci.mrg_rxbuf=off in qemu. But still the PPS is around 65 k on
> the VM dpdk vhost-user interface RX and the throughput is below 700 Mbps.
>
> *Are there any other parameters that can be tuned to increase the amount
> of packets per second forwarded from phy dpdk interface to the dpdk
> vhostuser interface inside the VM?*
>
> Thanks,
> Onkar
>
> On Fri, Oct 5, 2018 at 1:45 PM Onkar Pednekar  wrote:
>
>> Hi Tiago,
>>
>> Sure. I'll try that.
>>
>> Thanks,
>> Onkar
>>
>> On Fri, Oct 5, 2018 at 9:06 AM Lam, Tiago  wrote:
>>
>>> Hi Onkar,
>>>
>>> Thanks for shedding some light.
>>>
>>> I don't think your difference in performance will have to do with your
>>> OvS-DPDK setup. If you're taking the measurements directly from the
>>> iperf server side you'd be going through the "Internet". Assuming you
>>> don't have a dedicated connection there, things like your connection's
>>> bandwidth and the RTT from end to end start to matter considerably,
>>> especially for TCP.
>>>
>>> To get to the bottom of it I'd advise you to take the iperf server and
>>> connect it directly to the first machine (Machine 1). You would be
>>> excluding any "Internet" interference and be able to get the performance
>>> of a pvp scenario first.
>>>
>>> Assuming you're using kernel forwarding inside the VMs, if you want to
>>> squeeze in the extra performance it is probably wise to use DPDK testpmd
>>> to forward the traffic inside of the VMs as well, as explained here:
>>>
>>> http://docs.openvswitch.org/en/latest/howto/dpdk/#phy-vm-phy-vhost-loopback
>>>
>>> Regards,
>>> Tiago.
>>>
>>> On 04/10/2018 21:06, Onkar Pednekar wrote:
>>> > Hi Tiago,
>>> >
>>> > Thanks for your reply.
>>> >
>>> > Below are the answers to your questions in-line.
>>> >
>>> >
>>> > On Thu, Oct 4, 2018 at 4:07 AM Lam, Tiago wrote:
>>> >
>>> > Hi Onkar,
>>> >
>>> > Thanks for your email. Your setup isn't very clear to me, so a few
>>> > queries in-line.
>>> >
>>> > On 04/10/2018 06:06, Onkar Pednekar wrote:
>>> > > Hi,
>>> > >
>>> > > I have been experimenting with OVS DPDK on 1G interfaces. The
>>> > system has
>>> > > 8 cores (hyperthreading enabled) mix of dpdk and non-dpdk capable
>>> > ports,
>>> > > but the data traffic runs only on dpdk ports.
>>> > >
>>> > > DPDK ports are backed by vhost user netdev and I have configured
>>> the
>>> > > system so that hugepages are enabled, CPU cores isolated with PMD
>>> > > threads allocated to them and also pinning the vCPUs.
>>> > > When I run UDP traffic, I see ~1G throughput on dpdk interfaces
>>> > > with < 1% packet loss. However, with TCP traffic, I see around
>>> > > 300 Mbps throughput. I see that setting generic receive offload to
>>> > > off helps, but still the TCP throughput is much lower than the
>>> > > NIC's capabilities. I know that there will be some performance
>>> > > degradation for TCP as against UDP, but this is way below expected.
>>> > >
>>> > >
>>> >
>>> > When transmitting traffic between the DPDK ports, what are the
>>> flows you
>>> > have setup? Does it follow a p2p or pvp setup? In other words,
>>> does the
>>> > traffic flow between the VM and the physical ports, or only between
>>> > physical ports?
>>> >
>>> >
>>> >  The traffic is between the VM and the physical ports.
>>> >
>>> >
>>> > > I don't see any packets dropped for TCP on the internal VM
>>> > > (virtual) interfaces.
>>> > >
>>> > > I would like to know if there are any settings (offloads) for the
>>> > > interfaces or any other config I might be missing.
>>> >
>>> > What is the MTU set on the DPDK ports? Both physical and
>>> > vhost-user?
>>> >
>>> > $ ovs-vsctl get Interface [dpdk0|vhostuserclient0] mtu
>>> >
>>> >
>>> > MTU set on physical ports = 2000
>>> > MTU set on vhostuser ports = 1500
>>> >
>>> >
>>> > This will help to clarify some doubts around your setup first.
>>> >
>>> > Tiago.
>>> >
>>> > >
>>> > > Thanks,
>>> > > Onkar
>>> > >
>>> > >
>>> > > 

Re: [ovs-discuss] Issue when using local_ip with VXLAN tunnels in OVS

2018-11-27 Thread Gregory Rose

Siva,

You have a routing issue.

See interalia
https://github.com/OpenNebula/one/issues/2161
http://wwwaem.brocade.com/content/html/en/brocade-validated-design/brocade-vcs-fabric-ip-storage-bvd/GUID-CB5BFC4D-B2BE-4E9C-BA91-7E7E9BD35FCC.html
http://blog.arunsriraman.com/2017/02/how-to-setting-up-gre-or-vxlan-tunnel.html

For this to work you must be able to ping from the local IP to the 
remote IP *through* the remote IP address. As we have seen, that doesn't work.
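That requirement can be checked directly from the hypervisor (the addresses below are placeholders for the configured local_ip and the remote VTEP address):

```shell
# The encapsulated traffic must be routable from the configured local_ip
# to the remote tunnel endpoint:
ping -c 3 -I 192.168.1.10 192.168.2.10

# And the kernel must have a route that actually selects that source:
ip route get 192.168.2.10 from 192.168.1.10
```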


As an aside, why do you have two bridges to the same VMs? Your
configuration makes it impossible to set a route because you have two
sets of IP addresses and routes all on two bridges going into the same
VMs. In that configuration the local_ip option makes no sense. You
don't need it - you're already bridged.


I understand that you have seen the GRE configuration work, and I'm not
sure why, because it has the same requirement for the local IP to be
routable through the remote IP. And again, there is no point to the
local_ip option because the IP addresses do not need to be routed to
reach each other.

In any case, I'm going to set up a valid configuration and then make
sure that the local_ip option does work or not. I'll report back when
I'm done.

Thanks,

- Greg

On 11/20/2018 10:13 AM, Gregory Rose wrote:


On 11/20/2018 10:03 AM, Siva Teja ARETI wrote:



On Tue, Nov 20, 2018 at 12:59 PM Gregory Rose wrote:


On 11/19/2018 6:30 PM, Siva Teja ARETI wrote:


[user@hyp-1] ip route
default via A.B.C.D dev enp5s0 proto static metric 100
10.10.0.0/24 dev testbr0 proto kernel scope link src 10.10.0.1 linkdown
20.20.0.0/24 dev testbr1 proto kernel scope link src 20.20.0.1
30.30.0.0/24 dev testbr2 proto kernel scope link src 30.30.0.1


Hi Siva,

I'm curious about these bridges.  Are they Linux bridges or OVS
bridges?

If they are Linux bridges please provide the output of 'brctl show'.
If they are OVS bridges then please provide the output of
'ovs-vsctl show'.

Thanks!

- Greg


Hi Greg,

These are linux bridges.

[user@hyp1 ] brctl show
bridge name     bridge id               STP enabled     interfaces
docker0         8000.02428928dba5       no              veth6079ee7
testbr0         8000.                   yes
testbr1         8000.fe540005937c       yes             vnet2
                                                        vnet5
testbr2         8000.fe540079ef92       yes             vnet1
                                                        vnet4
virbr0          8000.fe54000ad370       yes             vnet0
                                                        vnet3

 Siva Teja.


Thanks Siva!  I'll follow up when I have more questions and/or results.

- Greg




Re: [ovs-discuss] Recirculation context in dpdk-ovs

2018-11-27 Thread Lam, Tiago
Hi,

A few comments in-line.

On 27/11/2018 13:20, 张萌 wrote:
>  Hi,
> 
>    I'm using "ovs-appctl ofproto/trace" to trace the flows in ovs-dpdk.
> 
>    When integrated with conntrack, the ovs rules end in table=10, which
> records the ct as the following flows:
> 
>  
> 
> -
> 
> [root@zm ~]# ovs-ofctl dump-flows br0 -O openflow15 table=10
> 
> OFPST_FLOW reply (OF1.5) (xid=0x2):
> 
> cookie=0x156ad2f7efd2d389, duration=15058.242s, table=10, n_packets=0,
> n_bytes=0, priority=3000,ip,nw_frag=later actions=goto_table:20
> 
> cookie=0x156ad2f7efd2d333, duration=15058.249s, table=10, n_packets=737,
> n_bytes=72226, priority=2000,icmp
> actions=ct(table=15,zone=NXM_NX_REG6[0..15])
> 
> cookie=0x156ad2f7efd2d337, duration=15058.249s, table=10,
> n_packets=4992, n_bytes=380540, priority=2000,udp
> actions=ct(table=15,zone=NXM_NX_REG6[0..15])
> 
> cookie=0x156ad2f7efd2d367, duration=15058.245s, table=10,
> n_packets=2028037440, n_bytes=183176086711, priority=2000,tcp
> actions=ct(table=15,zone=NXM_NX_REG6[0..15])
> 
> -
> 
>  
> 
>  
> 
>  
> 
>    And when I mock a packet using ofproto/trace, ovs records the
> conntrack and prints:
> 
>  
> 
> -
>   
> 
> 
> [root@ zm ~]# ovs-appctl ofproto/trace br0
> tcp,in_port=25,nw_dst=172.19.11.6,tp_dst=320,dl_dst=fa:16:3e:03:39:5f,dl_src=fa:16:3e:e5:cb:2c
>   
>    
> 
> Flow:
> tcp,in_port=25,vlan_tci=0x,dl_src=fa:16:3e:e5:cb:2c,dl_dst=fa:16:3e:03:39:5f,nw_src=0.0.0.0,nw_dst=172.19.11.6,nw_tos=0,nw_ecn=0,nw_ttl=0,tp_src=0,tp_dst=320,tcp_flags=0
> 
>  
> 
> bridge("br0")
> 
> -
> 
>  0. in_port=25, priority 100, cookie 0x156ad2f7efd2d4fb
> 
>     set_field:0x29->reg5
> 
>     set_field:0x19->reg6
> 
>     write_metadata:0x290001
> 
>     goto_table:5
> 
>  5. ip,in_port=25,dl_src=fa:16:3e:e5:cb:2c, priority 100, cookie
> 0x156ad2f7efd2d51f
> 
>     goto_table:10
> 
> 10. tcp, priority 2000, cookie 0x156ad2f7efd2d367
> 
>     ct(table=15,zone=NXM_NX_REG6[0..15])
> 
>     drop
> 
>  
> 
> Final flow:
> tcp,reg5=0x29,reg6=0x19,metadata=0x290001,in_port=25,vlan_tci=0x,dl_src=fa:16:3e:e5:cb:2c,dl_dst=fa:16:3e:03:39:5f,nw_src=0.0.0.0,nw_dst=172.19.11.6,nw_tos=0,nw_ecn=0,nw_ttl=0,tp_src=0,tp_dst=320,tcp_flags=0
> 
> Megaflow:
> recirc_id=0,tcp,in_port=25,dl_src=fa:16:3e:e5:cb:2c,nw_dst=172.0.0.0/6,nw_frag=no
> 
> Datapath actions: ct(zone=25),recirc(0x4123)   
> 
> -
> 
>  
> 
>    But when I set the recirc_id in the flow, ovs prints:
> 
> -
> 
> [root@zm ~]# ovs-appctl ofproto/trace br0
> recirc_id=0x4123,ct_state=new,tcp,in_port=25,nw_dst=172.19.11.6,tp_dst=320,dl_dst=fa:16:3e:03:39:5f,dl_src=fa:16:3e:e5:cb:2c
> 
> Flow:
> recirc_id=0x4123,ct_state=new,tcp,in_port=25,vlan_tci=0x,dl_src=fa:16:3e:e5:cb:2c,dl_dst=fa:16:3e:03:39:5f,nw_src=0.0.0.0,nw_dst=172.19.11.6,nw_tos=0,nw_ecn=0,nw_ttl=0,tp_src=0,tp_dst=320,tcp_flags=0
> 
>  
> 
> bridge("br0")
> 
> -
> 
>   Recirculation context not found for ID 4123 
> 
>  
> 
> Final flow: unchanged
> 
> Megaflow: recirc_id=0x4123,ip,in_port=25,nw_frag=no
> 
> Datapath actions: drop
> 
> Translation failed (No recirculation context), packet is dropped.
> 

I believe you're getting the above message because by the time you issue
the command the recirculation context is already gone.
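A way around replaying a stale recirc_id - assuming a reasonably recent OVS (2.8+, if I remember correctly, where ofproto/trace gained a --ct-next option) - is to let the trace itself continue through the ct() recirculation:

```shell
# Each --ct-next supplies the ct_state to assume for the next ct()
# recirculation, so the trace proceeds past the ct/recirc step instead
# of stopping at "Datapath actions: ct(...),recirc(...)":
ovs-appctl ofproto/trace br0 \
    'tcp,in_port=25,nw_dst=172.19.11.6,tp_dst=320,dl_dst=fa:16:3e:03:39:5f,dl_src=fa:16:3e:e5:cb:2c' \
    --ct-next 'trk,new'
```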

>  
> 
> -
> 
>  
> 
>    And when I dump the conntracks in ovs:
> 
> -
> 
>  
> 
> [root@A04-R08-I137-204-9320C72 ~]# ovs-dpctl dump-conntrack ovs-netdev 
> 
> 2018-11-27T05:01:30Z|1|dpif_netlink|WARN|Generic Netlink family
> 'ovs_datapath' does not exist. The Open vSwitch kernel module is
> probably not loaded.
> 
> ovs-dpctl: opening datapath (No such file or directory)
> 

Use the one below instead; that should give you more information:

$ ovs-appctl dpctl/dump-conntrack

> -
> 
>  
> 
>    Can anyone tell me how to mock a packet that can pass the ct in ovs-dpdk?
> 

What are you 

Re: [ovs-discuss] ovs-controller - Trivial reference controller packaged with Open vSwitch

2018-11-27 Thread Ben Pfaff
On Tue, Nov 27, 2018 at 09:09:45AM +, Avi Cohen (A) wrote:
> 
> 
> > -Original Message-
> > From: Ben Pfaff [mailto:b...@ovn.org]
> > Sent: Monday, 26 November, 2018 9:30 PM
> > To: Avi Cohen (A)
> > Cc: ovs-discuss
> > Subject: Re: [ovs-discuss] ovs-controller - Trivial reference controller
> > packaged with Open vSwitch
> > 
> > On Mon, Nov 26, 2018 at 04:22:45PM +, Avi Cohen (A) wrote:
> > > I need a lite openflow controller to configure my OVS .
> > 
> > Why?
[Avi Cohen (A)]  I don't have time to learn an OF controller. I need it just 
for run-time flow installation - I will run my application there. Other OVS 
configuration is done at boot time.

You probably don't need a controller at all.  If you do,
ovs-testcontroller isn't going to help much.
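For run-time flow installation without any controller, ovs-ofctl against the local switch is usually enough (the bridge name and flow below are illustrative):

```shell
# Install, inspect, and remove flows directly; no OpenFlow controller needed:
ovs-ofctl add-flow br0 'priority=100,ip,nw_dst=10.0.0.2,actions=output:2'
ovs-ofctl dump-flows br0
ovs-ofctl del-flows br0 'ip,nw_dst=10.0.0.2'
```

An application can invoke these commands (or use an OpenFlow library speaking to the bridge's passive listener) at run time, which covers the "install flows while running" use case without a full controller.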


Re: [ovs-discuss] install OVS on freebsd

2018-11-27 Thread Ben Pfaff
Please don't drop the mailing list.

The first line in the document you cite is "This document describes how
to build and install Open vSwitch on a generic Linux, FreeBSD, or NetBSD
host."  On FreeBSD, which parts of it are wrong?  If it is inaccurate,
we would like to fix it.

On Tue, Nov 27, 2018 at 09:24:01AM +0330, Ali Forouzan wrote:
> The installation guide at
> "http://docs.openvswitch.org/en/latest/intro/install/general/" is written
> for Linux, and the "Build Requirements" and "Starting" sections are for
> that environment too; I can't find any installation guide for FreeBSD.
> 
> Anyway, even if you do manage to make and install OVS successfully, all
> the directories you want and need differ from the Linux install guide. 
> 
> Sent from Mail for Windows 10
> 
> From: Ben Pfaff
> Sent: Monday, November 26, 2018 5:53 PM
> To: Ali Forouzan
> Cc: ovs-discuss@openvswitch.org
> Subject: Re: [ovs-discuss] install OVS on freebsd
> 
> On Mon, Nov 26, 2018 at 11:02:28AM +0330, Ali Forouzan via discuss wrote:
> > I want to install OVS based on netmap/vale on FreeBSD 11.1. For this
> > purpose (installing OVS that works with the NETMAP/VALE technology), the
> > first step is to install OVS on FreeBSD, and I can't find any guide for
> > how to install OVS on FreeBSD, and can't install it successfully.
> > 
> > Can you help me to install OVS on FreeBSD? 
> 
> Did you try following the installation guide?  If so, what problem did
> you encounter?
> 


[ovs-discuss] Recirculation context in dpdk-ovs

2018-11-27 Thread 张萌
 Hi,

   I'm using "ovs-appctl ofproto/trace" to trace the flows in ovs-dpdk.

   When integrated with conntrack, the ovs rules end in table=10, which 
records the ct as the following flows:

 

-

[root@zm ~]# ovs-ofctl dump-flows br0 -O openflow15 table=10

OFPST_FLOW reply (OF1.5) (xid=0x2):

cookie=0x156ad2f7efd2d389, duration=15058.242s, table=10, n_packets=0, 
n_bytes=0, priority=3000,ip,nw_frag=later actions=goto_table:20

cookie=0x156ad2f7efd2d333, duration=15058.249s, table=10, n_packets=737, 
n_bytes=72226, priority=2000,icmp actions=ct(table=15,zone=NXM_NX_REG6[0..15])

cookie=0x156ad2f7efd2d337, duration=15058.249s, table=10, n_packets=4992, 
n_bytes=380540, priority=2000,udp actions=ct(table=15,zone=NXM_NX_REG6[0..15])

cookie=0x156ad2f7efd2d367, duration=15058.245s, table=10, n_packets=2028037440, 
n_bytes=183176086711, priority=2000,tcp 
actions=ct(table=15,zone=NXM_NX_REG6[0..15])

-

 

 

 

   And when I mock a packet using ofproto/trace, ovs records the conntrack 
and prints:

 

-
  

[root@ zm ~]# ovs-appctl ofproto/trace br0 
tcp,in_port=25,nw_dst=172.19.11.6,tp_dst=320,dl_dst=fa:16:3e:03:39:5f,dl_src=fa:16:3e:e5:cb:2c
  

Flow: 
tcp,in_port=25,vlan_tci=0x,dl_src=fa:16:3e:e5:cb:2c,dl_dst=fa:16:3e:03:39:5f,nw_src=0.0.0.0,nw_dst=172.19.11.6,nw_tos=0,nw_ecn=0,nw_ttl=0,tp_src=0,tp_dst=320,tcp_flags=0

 

bridge("br0")

-

 0. in_port=25, priority 100, cookie 0x156ad2f7efd2d4fb

set_field:0x29->reg5

set_field:0x19->reg6

write_metadata:0x290001

goto_table:5

 5. ip,in_port=25,dl_src=fa:16:3e:e5:cb:2c, priority 100, cookie 
0x156ad2f7efd2d51f

goto_table:10

10. tcp, priority 2000, cookie 0x156ad2f7efd2d367

ct(table=15,zone=NXM_NX_REG6[0..15])

drop

 

Final flow: 
tcp,reg5=0x29,reg6=0x19,metadata=0x290001,in_port=25,vlan_tci=0x,dl_src=fa:16:3e:e5:cb:2c,dl_dst=fa:16:3e:03:39:5f,nw_src=0.0.0.0,nw_dst=172.19.11.6,nw_tos=0,nw_ecn=0,nw_ttl=0,tp_src=0,tp_dst=320,tcp_flags=0

Megaflow: 
recirc_id=0,tcp,in_port=25,dl_src=fa:16:3e:e5:cb:2c,nw_dst=172.0.0.0/6,nw_frag=no

Datapath actions: ct(zone=25),recirc(0x4123)   

-

 

   But when I set the recirc_id in the flow, ovs prints:

-

[root@zm ~]# ovs-appctl ofproto/trace br0 
recirc_id=0x4123,ct_state=new,tcp,in_port=25,nw_dst=172.19.11.6,tp_dst=320,dl_dst=fa:16:3e:03:39:5f,dl_src=fa:16:3e:e5:cb:2c

Flow: 
recirc_id=0x4123,ct_state=new,tcp,in_port=25,vlan_tci=0x,dl_src=fa:16:3e:e5:cb:2c,dl_dst=fa:16:3e:03:39:5f,nw_src=0.0.0.0,nw_dst=172.19.11.6,nw_tos=0,nw_ecn=0,nw_ttl=0,tp_src=0,tp_dst=320,tcp_flags=0

 

bridge("br0")

-

  Recirculation context not found for ID 4123 

 

Final flow: unchanged

Megaflow: recirc_id=0x4123,ip,in_port=25,nw_frag=no

Datapath actions: drop

Translation failed (No recirculation context), packet is dropped.

 

-

 

   And when I dump the conntracks in ovs:

-

 

[root@A04-R08-I137-204-9320C72 ~]# ovs-dpctl dump-conntrack ovs-netdev 

2018-11-27T05:01:30Z|1|dpif_netlink|WARN|Generic Netlink family 
'ovs_datapath' does not exist. The Open vSwitch kernel module is probably not 
loaded.

ovs-dpctl: opening datapath (No such file or directory)

-

 

   Can anyone tell me how to mock a packet that can pass the ct in ovs-dpdk?

 

 

Thanks

zhangmeng


Re: [ovs-discuss] ovs-controller - Trivial reference controller packaged with Open vSwitch

2018-11-27 Thread Avi Cohen (A)



> -Original Message-
> From: Ben Pfaff [mailto:b...@ovn.org]
> Sent: Monday, 26 November, 2018 9:30 PM
> To: Avi Cohen (A)
> Cc: ovs-discuss
> Subject: Re: [ovs-discuss] ovs-controller - Trivial reference controller
> packaged with Open vSwitch
> 
> On Mon, Nov 26, 2018 at 04:22:45PM +, Avi Cohen (A) wrote:
> > I need a lite openflow controller to configure my OVS .
> 
> Why?
[Avi Cohen (A)]  I don't have time to learn an OF controller. I need it just 
for run-time flow installation - I will run my application there. Other OVS 
configuration is done at boot time.