[ovs-discuss] single thread or multi thread for packet process in kernel

2016-12-07 Thread sundk
In kernel mode, OVS uses a hook function (netdev_frame_hook) to receive packets
from the kernel and then processes them in another function
(netdev_port_receive).  All of these functions seem to run in a single thread,
since I can't see any thread-creation code in the kernel packet-processing
path.  That means the hook function can only return after packet processing
completes, and only then can another packet enter the hook function.  So it
seems packet processing in kernel mode is single-threaded.  If not, how is
multi-threading achieved?
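
For reference, here is a simplified sketch of the hook in question, adapted
from the OVS kernel datapath (net/openvswitch/vport-netdev.c); details vary by
kernel version:

    /* rx_handler attached to each OVS port's net_device.  The kernel calls it
     * from its normal receive path (NAPI/softirq context) with rcu_read_lock
     * held, so it can run concurrently on every CPU that is receiving packets,
     * even though OVS creates no kernel threads for this. */
    static rx_handler_result_t netdev_frame_hook(struct sk_buff **pskb)
    {
            struct sk_buff *skb = *pskb;

            if (unlikely(skb->pkt_type == PACKET_LOOPBACK))
                    return RX_HANDLER_PASS;

            netdev_port_receive(skb);   /* flow lookup and actions happen here */
            return RX_HANDLER_CONSUMED;
    }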

Thanks
Tony Sun


Sent from my iPhone




___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


[ovs-discuss] Questions about datapath SNAT flag

2016-12-07 Thread 张东亚
Hi List,

We are now developing a new product with OVS NAT; while testing it, we found
a question about the SNAT state flag.

Suppose I have the following flows:

 cookie=0x5847fc2c, duration=54000.281s, table=50,
n_packets=3928155, n_bytes=55143896854, priority=100,ip,metadata=0x1
actions=ct(table=51,zone=1,nat)
 cookie=0x5847fc2c, duration=54009.712s, table=51,
n_packets=229367, n_bytes=18280827,
priority=100,ct_state=-snat,ct_zone=1,ip,metadata=0x1
actions=ct(commit,table=65,zone=1,nat(src=169.254.10.0))
 cookie=0x5847fc2c, duration=54009.712s, table=51,
n_packets=3698797, n_bytes=55125616909,
priority=100,ct_state=+snat,ct_zone=1,ip,metadata=0x1 actions=goto_table:65

When a unidirectional flow (IP_CT_NEW in conntrack terms, for example pinging
a host that is not online) passes through these rules, even after the first
packet has been committed, the second and later packets still have to match
the flow with ct_state=-snat.  After reading the datapath code, it seems the
following code has special handling around unidirectional flows (the
ctinfo != IP_CT_NEW check):

    if (info->nat & OVS_CT_NAT && ctinfo != IP_CT_NEW &&
        ct->status & IPS_NAT_MASK &&
        (ctinfo != IP_CT_RELATED || info->commit)) {
            /* NAT an established or related connection like before. */
            if (CTINFO2DIR(ctinfo) == IP_CT_DIR_REPLY)
                    /* This is the REPLY direction for a connection
                     * for which NAT was applied in the forward
                     * direction.  Do the reverse NAT.
                     */
                    maniptype = ct->status & IPS_SRC_NAT
                                ? NF_NAT_MANIP_DST : NF_NAT_MANIP_SRC;
            else
                    maniptype = ct->status & IPS_SRC_NAT
                                ? NF_NAT_MANIP_SRC : NF_NAT_MANIP_DST;
    } else if (info->nat & OVS_CT_SRC_NAT) {
            maniptype = NF_NAT_MANIP_SRC;
    } else if (info->nat & OVS_CT_DST_NAT) {
            maniptype = NF_NAT_MANIP_DST;
    } else {
            return NF_ACCEPT; /* Connection is not NATed. */
    }

However, for a bidirectional flow (IP_CT_ESTABLISHED in conntrack terms), the
bare nat action performs NAT directly and the second flow is never matched.

My question is whether this behavior is expected, or whether it is meant to
mimic the Linux conntrack/NAT behavior.

I think that if the bare nat action could also NAT the second and later
packets of a unidirectional flow, we could save a datapath flow and an upcall,
which might boost performance a little.

Thanks a lot.
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


[ovs-discuss] Re: On OVNDB-HA using pacemaker two issues discussed

2016-12-07 Thread 李国帅 - 姜尚0387
Systemd also manages the ovsdb process; will there be a conflict?

------------------------------------------------------------------
From: Babu Shanmugam
Sent: Thursday, December 8, 2016 09:26
To: 姜尚0387; discuss; Andy Zhou; blp
Subject: Re: [ovs-discuss] On OVNDB-HA using pacemaker two issues discussed

On Wednesday 07 December 2016 08:58 AM, 姜尚0387 wrote:
> I am learning OVNDB-HA using pacemaker; I feel this is a good design.
>
> At the same time I have two questions I would like to discuss:
>
> 1.  We have two resources: one is the master node of the OVNDB, the other
>     is the VIP.  Pacemaker resource constraints are dependencies, not
>     symbiotic, so we must consider who depends on whom.
>
>     http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html/Pacemaker_Explained/s-resource-colocation.html
>
>     If the OVNDB master node depends on the VIP, then when the VIP or the
>     VIP's network goes down, the OVNDB master node follows the VIP and
>     migrates.  But if the OVNDB process on the master node dies, nothing
>     migrates.
>
>     If the VIP depends on the OVNDB master process, then when the OVNDB
>     master process goes down, everything migrates.  And when the VIP or
>     the VIP's network goes down, indicating that the node is down,
>     everything also migrates.
>
>     Is it better to configure the resource constraints like this?
>
>       pcs constraint order ovndb_servers-master then VirtualIP
>       pcs constraint colocation add VirtualIP with master ovndb_servers-master score=INFINITY
>
> 2.  An OVN node runs three processes in total: the southbound database,
>     the northbound database, and the northd process.
>
>     But the OCF script only starts and monitors the database processes;
>     there is no northd process.
>
>     How can high availability be ensured for the northd process?
>
>     My idea is that the OCF script should also start and monitor the
>     northd process, because I think these three processes belong together.

We have a systemd service for ovn-northd.  One can create a primitive systemd
pacemaker resource for northd, which will start ovn-northd on another node in
the cluster when the node on which it runs fails.
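
For illustration, a minimal sketch of such a resource (the ovn-northd systemd
unit name and the resource names here are assumptions that depend on your
packaging):

    pcs resource create ovn-northd systemd:ovn-northd op monitor interval=30s
    pcs constraint colocation add ovn-northd with master ovndb_servers-master score=INFINITY
    pcs constraint order promote ovndb_servers-master then start ovn-northd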

___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] Packets not forwarded to queues after hitting flows in OVS QoS

2016-12-07 Thread Ben Pfaff
It's pretty surprising to see that only queue 2 is ever set in any of
the datapath flows you show, that is, only set(skb_priority(0x10002)).

What are all of these "transl-" prefixes?  What software are you using?

On Wed, Dec 07, 2016 at 10:00:00PM +0100, Santhosh R P wrote:
> Hi Ben,
> 
> Thanks for the response. I have provided the flows dump here.
> 
> Host MAC Addr: 90:1b:0e:06:0e:c4
> VM1 MAC Addr: 52:54:00:3e:2c:cf
> 
> When the flows are set as below, no QoS takes place. Bandwidth achieved is
> ~880mbps.
> 
> transl-ofctl part-flows test
> 
> ovs-ofctl add-flow test priority=2000,in_port=2,actions=set_queue:1,normal
> ovs-ofctl add-flow test priority=2000,in_port=3,actions=set_queue:2,normal
> transl-ofctl add-flow test priority = 10 actions = normal
> 
> *Before iperf test:*
> 
> [Root @ omi-student15 openvswitch] # transl-dpctl dump flows
> recirc_id(0),in_port(2),eth(src=90:1b:0e:06:0e:c4,dst=00:
> 00:5e:00:01:01),eth_type(0x0800),ipv4(frag=no), packets:2, bytes:160,
> used:2.706s, actions:1 recirc_id(0),in_port(1),eth(
> src=00:15:2c:fa:bb:80,dst=ff:ff:ff:ff:ff:ff),eth_type(
> 0x0806),arp(sip=134.60.30.2,tip=134.60.30.91,op=1/0xff), packets:0,
> bytes:0, used:never, actions:2,3,4 recirc_id(0),in_port(1),eth(
> src=00:15:2c:fa:bb:80,dst=ff:ff:ff:ff:ff:ff),eth_type(
> 0x0806),arp(sip=134.60.30.2,tip=134.60.30.90,op=1/0xff), packets:0,
> bytes:0, used:never, actions:2,3,4 recirc_id(0),in_port(1),eth(
> src=00:15:2c:fa:bb:80,dst=90:1b:0e:06:0e:c4),eth_type(0x0800),ipv4(frag=no),
> packets:3, bytes:640, used:2.733s, actions:2
> 
> 
> *During iperf test:*
> 
> [Root @ omi-student15 openvswitch] # transl-dpctl dump flows recirc_id(0)
> *,in_port(2),eth(src=90:1b:0e:06:0e:c4,dst=52:54:00:3e:2c:cf),*eth_type(0x0800),ipv4(frag=no),
> *packets:12516, *bytes:846036, used:0.000s, flags:SP., actions:3
> recirc_id(0),*in_port(2),eth(src=90:1b:0e:06:0e:c4,dst=00:00:5e:00:01:01)*
> ,eth_type(0x0800),ipv4(frag=no),* packets:5413,* bytes:232047658,
> used:0.001s, flags:SP., actions:1 recirc_id(0),in_port(1),eth(
> src=00:15:2c:fa:bb:80,dst=ff:ff:ff:ff:ff:ff),eth_type(
> 0x0806),arp(sip=134.60.30.2,tip=134.60.30.235,op=1/0xff), packets:0,
> bytes:0, used:never, actions:2,3,4 recirc_id(0),skb_priority(0)
> *,in_port(3),eth(src=52:54:00:3e:2c:cf,dst=90:1b:0e:06:0e:c4)*
> ,eth_type(0x0800),ipv4(frag=no), *packets:5412, *bytes:232048096,
> used:0.001s, flags:SP., actions:set(skb_priority(0x10002)),2
> recirc_id(0),in_port(1),eth(src=00:15:2c:fa:bb:80,dst=ff:
> ff:ff:ff:ff:ff),eth_type(0x0806),arp(sip=134.60.30.2,tip=134.60.30.220,op=1/0xff),
> packets:0, bytes:0, used:never, actions:2,3,4 recirc_id(0),in_port(1),eth(
> src=00:15:2c:fa:bb:80,dst=ff:ff:ff:ff:ff:ff),eth_type(
> 0x0806),arp(sip=134.60.30.2,tip=134.60.30.91,op=1/0xff), packets:0,
> bytes:0, used:never, actions:2,3,4 recirc_id(0),in_port(1),eth(
> src=00:15:2c:fa:bb:80,dst=ff:ff:ff:ff:ff:ff),eth_type(
> 0x0806),arp(sip=134.60.30.2,tip=134.60.30.103,op=1/0xff), packets:0,
> bytes:0, used:never, actions:2,3,4 recirc_id(0),in_port(1),eth(
> src=00:15:2c:fa:bb:80,dst=ff:ff:ff:ff:ff:ff),eth_type(
> 0x0806),arp(sip=134.60.30.2,tip=134.60.30.90,op=1/0xff), packets:0,
> bytes:0, used:never, actions:2,3,4 recirc_id(0),in_port(1),eth(
> src=00:15:2c:fa:bb:80,dst=90:1b:0e:06:0e:c4),eth_type(0x0800),ipv4(frag=no),
> packets:12520, bytes:847246, used:0.000s, flags:SP., actions:2
> 
> [Root @ omi-student15 openvswitch] # transl-dpctl dump flows recirc_id(0)
> *,in_port(2),eth(src=90:1b:0e:06:0e:c4,dst=52:54:00:3e:2c:cf)*,eth_type(0x0800),ipv4(frag=no),
> *packets:38918, *bytes:2611380, used:0.001s, flags:SP., actions:3
> recirc_id(0)*,in_port(2),eth(src=90:1b:0e:06:0e:c4,dst=00:00:5e:00:01:01)*
> ,eth_type(0x0800),ipv4(frag=no),* packets:12524, *bytes:534365224,
> used:0.001s, flags:SP., actions:1 recirc_id(0),in_port(1),eth(
> src=00:15:2c:fa:bb:80,dst=ff:ff:ff:ff:ff:ff),eth_type(
> 0x0806),arp(sip=134.60.30.2,tip=134.60.30.235,op=1/0xff), packets:0,
> bytes:0, used:never, actions:2,3,4 recirc_id(0),skb_priority(0)
> *,in_port(3),eth(src=52:54:00:3e:2c:cf,dst=90:1b:0e:06:0e:c4)*
> ,eth_type(0x0800),ipv4(frag=no),* packets:12521,* bytes:534365726,
> used:0.001s, flags:SP., actions:set(skb_priority(0x10002)),2
> recirc_id(0),in_port(1),eth(src=00:15:2c:fa:bb:80,dst=ff:
> ff:ff:ff:ff:ff),eth_type(0x0806),arp(sip=134.60.30.2,tip=134.60.30.220,op=1/0xff),
> packets:0, bytes:0, used:never, actions:2,3,4 recirc_id(0),in_port(1),eth(
> src=00:15:2c:fa:bb:80,dst=ff:ff:ff:ff:ff:ff),eth_type(
> 0x0806),arp(sip=134.60.30.2,tip=134.60.30.91,op=1/0xff), packets:0,
> bytes:0, used:never, actions:2,3,4 recirc_id(0),in_port(1),eth(
> src=00:15:2c:fa:bb:80,dst=ff:ff:ff:ff:ff:ff),eth_type(
> 0x0806),arp(sip=134.60.30.2,tip=134.60.30.103,op=1/0xff), packets:0,
> bytes:0, used:never, actions:2,3,4 recirc_id(0),in_port(1),eth(
> src=00:15:2c:fa:bb:80,dst=ff:ff:ff:ff:ff:ff),eth_type(
> 0x0806),arp(sip=134.60.30.2,tip=134.60.30.90,op=1/0xff), packets:0,
> bytes:0, used:never, actions:2,3,4 rec

Re: [ovs-discuss] On OVNDB-HA using pacemaker two issues discussed

2016-12-07 Thread Andy Zhou
On Tue, Dec 6, 2016 at 7:28 PM, 姜尚0387  wrote:

> I am learning OVNDB-HA using pacemaker; I feel this is a good design.
>
> At the same time I have two questions I would like to discuss:
>
>   1.  We have two resources: one is the master node of the OVNDB, the other
> is the VIP.
>
>    Pacemaker resource constraints are dependencies, not symbiotic, so
> we must consider who depends on whom.
>
>    http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html/
> Pacemaker_Explained/s-resource-colocation.html
>
>    If the OVNDB master node depends on the VIP, then when the VIP or the
> VIP's network goes down, the OVNDB master node follows the VIP and migrates.
> But if the OVNDB process on the master node dies, nothing migrates.
>
>    If the VIP depends on the OVNDB master process, then when the OVNDB
> master process goes down, everything migrates.  And when the VIP or the
> VIP's network goes down, indicating that the node is down, everything also
> migrates.
>
>    Is it better to configure the resource constraints like this?
>
>   pcs constraint order ovndb_servers-master then VirtualIP
>   pcs constraint colocation add VirtualIP with master ovndb_servers-master
> score=INFINITY
>

OVSDB-HA is mainly designed to provide high availability against host
crashes.  If the ovsdb-server process itself crashes, it can restart
automatically when the --monitor option is specified.
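
For reference, a minimal sketch of a self-restarting ovsdb-server invocation
(the paths here are only examples):

    ovsdb-server --detach --monitor --pidfile --log-file \
        --remote=punix:/var/run/openvswitch/db.sock /etc/openvswitch/conf.db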


>   2,  OVN nodes in a total of three processes, the South to the database,
> north to the database, northd process.
>
>But only in the OCF script to start and monitor the database
> process, there is no northd process.
>
>  How to ensure high reliability northd process?
>
>  My idea is OCF script at the same time start and monitor northd
> process, because I think these three processes is together.
>

I agree that northd HA is not well addressed by the current OCF script.
However, I am not sure it can be addressed by simply extending the current OCF
script.  In theory, northd can run on another machine, accessing ovsdb-server
via the virtual IP.  Maybe this is more of a packaging issue.
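
For illustration, a minimal sketch of that setup (the VIP address is a
placeholder; 6641/6642 are the default northbound/southbound ports):

    ovn-northd --ovnnb-db=tcp:192.0.2.10:6641 --ovnsb-db=tcp:192.0.2.10:6642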
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] Packets not forwarded to queues after hitting flows in OVS QoS

2016-12-07 Thread Santhosh R P
Hi Ben,

Thanks for the response. I have provided the flows dump here.

Host MAC Addr: 90:1b:0e:06:0e:c4
VM1 MAC Addr: 52:54:00:3e:2c:cf

When the flows are set as below, no QoS takes place. Bandwidth achieved is
~880mbps.

transl-ofctl part-flows test

ovs-ofctl add-flow test priority=2000,in_port=2,actions=set_queue:1,normal
ovs-ofctl add-flow test priority=2000,in_port=3,actions=set_queue:2,normal
transl-ofctl add-flow test priority = 10 actions = normal

*Before iperf test:*

[Root @ omi-student15 openvswitch] # transl-dpctl dump flows
recirc_id(0),in_port(2),eth(src=90:1b:0e:06:0e:c4,dst=00:
00:5e:00:01:01),eth_type(0x0800),ipv4(frag=no), packets:2, bytes:160,
used:2.706s, actions:1 recirc_id(0),in_port(1),eth(
src=00:15:2c:fa:bb:80,dst=ff:ff:ff:ff:ff:ff),eth_type(
0x0806),arp(sip=134.60.30.2,tip=134.60.30.91,op=1/0xff), packets:0,
bytes:0, used:never, actions:2,3,4 recirc_id(0),in_port(1),eth(
src=00:15:2c:fa:bb:80,dst=ff:ff:ff:ff:ff:ff),eth_type(
0x0806),arp(sip=134.60.30.2,tip=134.60.30.90,op=1/0xff), packets:0,
bytes:0, used:never, actions:2,3,4 recirc_id(0),in_port(1),eth(
src=00:15:2c:fa:bb:80,dst=90:1b:0e:06:0e:c4),eth_type(0x0800),ipv4(frag=no),
packets:3, bytes:640, used:2.733s, actions:2


*During iperf test:*

[Root @ omi-student15 openvswitch] # transl-dpctl dump flows recirc_id(0)
*,in_port(2),eth(src=90:1b:0e:06:0e:c4,dst=52:54:00:3e:2c:cf),*eth_type(0x0800),ipv4(frag=no),
*packets:12516, *bytes:846036, used:0.000s, flags:SP., actions:3
recirc_id(0),*in_port(2),eth(src=90:1b:0e:06:0e:c4,dst=00:00:5e:00:01:01)*
,eth_type(0x0800),ipv4(frag=no),* packets:5413,* bytes:232047658,
used:0.001s, flags:SP., actions:1 recirc_id(0),in_port(1),eth(
src=00:15:2c:fa:bb:80,dst=ff:ff:ff:ff:ff:ff),eth_type(
0x0806),arp(sip=134.60.30.2,tip=134.60.30.235,op=1/0xff), packets:0,
bytes:0, used:never, actions:2,3,4 recirc_id(0),skb_priority(0)
*,in_port(3),eth(src=52:54:00:3e:2c:cf,dst=90:1b:0e:06:0e:c4)*
,eth_type(0x0800),ipv4(frag=no), *packets:5412, *bytes:232048096,
used:0.001s, flags:SP., actions:set(skb_priority(0x10002)),2
recirc_id(0),in_port(1),eth(src=00:15:2c:fa:bb:80,dst=ff:
ff:ff:ff:ff:ff),eth_type(0x0806),arp(sip=134.60.30.2,tip=134.60.30.220,op=1/0xff),
packets:0, bytes:0, used:never, actions:2,3,4 recirc_id(0),in_port(1),eth(
src=00:15:2c:fa:bb:80,dst=ff:ff:ff:ff:ff:ff),eth_type(
0x0806),arp(sip=134.60.30.2,tip=134.60.30.91,op=1/0xff), packets:0,
bytes:0, used:never, actions:2,3,4 recirc_id(0),in_port(1),eth(
src=00:15:2c:fa:bb:80,dst=ff:ff:ff:ff:ff:ff),eth_type(
0x0806),arp(sip=134.60.30.2,tip=134.60.30.103,op=1/0xff), packets:0,
bytes:0, used:never, actions:2,3,4 recirc_id(0),in_port(1),eth(
src=00:15:2c:fa:bb:80,dst=ff:ff:ff:ff:ff:ff),eth_type(
0x0806),arp(sip=134.60.30.2,tip=134.60.30.90,op=1/0xff), packets:0,
bytes:0, used:never, actions:2,3,4 recirc_id(0),in_port(1),eth(
src=00:15:2c:fa:bb:80,dst=90:1b:0e:06:0e:c4),eth_type(0x0800),ipv4(frag=no),
packets:12520, bytes:847246, used:0.000s, flags:SP., actions:2

[Root @ omi-student15 openvswitch] # transl-dpctl dump flows recirc_id(0)
*,in_port(2),eth(src=90:1b:0e:06:0e:c4,dst=52:54:00:3e:2c:cf)*,eth_type(0x0800),ipv4(frag=no),
*packets:38918, *bytes:2611380, used:0.001s, flags:SP., actions:3
recirc_id(0)*,in_port(2),eth(src=90:1b:0e:06:0e:c4,dst=00:00:5e:00:01:01)*
,eth_type(0x0800),ipv4(frag=no),* packets:12524, *bytes:534365224,
used:0.001s, flags:SP., actions:1 recirc_id(0),in_port(1),eth(
src=00:15:2c:fa:bb:80,dst=ff:ff:ff:ff:ff:ff),eth_type(
0x0806),arp(sip=134.60.30.2,tip=134.60.30.235,op=1/0xff), packets:0,
bytes:0, used:never, actions:2,3,4 recirc_id(0),skb_priority(0)
*,in_port(3),eth(src=52:54:00:3e:2c:cf,dst=90:1b:0e:06:0e:c4)*
,eth_type(0x0800),ipv4(frag=no),* packets:12521,* bytes:534365726,
used:0.001s, flags:SP., actions:set(skb_priority(0x10002)),2
recirc_id(0),in_port(1),eth(src=00:15:2c:fa:bb:80,dst=ff:
ff:ff:ff:ff:ff),eth_type(0x0806),arp(sip=134.60.30.2,tip=134.60.30.220,op=1/0xff),
packets:0, bytes:0, used:never, actions:2,3,4 recirc_id(0),in_port(1),eth(
src=00:15:2c:fa:bb:80,dst=ff:ff:ff:ff:ff:ff),eth_type(
0x0806),arp(sip=134.60.30.2,tip=134.60.30.91,op=1/0xff), packets:0,
bytes:0, used:never, actions:2,3,4 recirc_id(0),in_port(1),eth(
src=00:15:2c:fa:bb:80,dst=ff:ff:ff:ff:ff:ff),eth_type(
0x0806),arp(sip=134.60.30.2,tip=134.60.30.103,op=1/0xff), packets:0,
bytes:0, used:never, actions:2,3,4 recirc_id(0),in_port(1),eth(
src=00:15:2c:fa:bb:80,dst=ff:ff:ff:ff:ff:ff),eth_type(
0x0806),arp(sip=134.60.30.2,tip=134.60.30.90,op=1/0xff), packets:0,
bytes:0, used:never, actions:2,3,4 recirc_id(0),in_port(1),eth(
src=00:15:2c:fa:bb:80,dst=90:1b:0e:06:0e:c4),eth_type(0x0800),ipv4(frag=no),
packets:38924, bytes:2612954, used:0.001s, flags:SP., actions:2

[Root @ omi-student15 openvswitch] # transl-dpctl dump flows recirc_id(0)
*,in_port(2),eth(src=90:1b:0e:06:0e:c4,dst=52:54:00:3e:2c:cf)*,eth_type(
0x0800),ipv4(frag=no),* packets:66352,* bytes:4448220, used:0.000s,
flags:SP., actions:3 recirc_id(0),in_port(1),eth(
src=e4:8d:8c:8

Re: [ovs-discuss] Bridge does not specify output ; ignoring? (OVS-DPDK Ubuntu)

2016-12-07 Thread Aaron Conole
"Stokes, Ian"  writes:

>> The mirror-related errors in the log.  ovs-tcpdump creates a mirror.
>
> Are there any other errors in the logs? (with a view to figuring out
> why traffic isn't reaching the VMs). Feel free to attach them if
> you're unsure.
>
> Can you provide some more detail with regards to your setup?
>
> OVS release/commit, DPDK release, QEMU version etc.
>
>> 
>> On Tue, Dec 06, 2016 at 03:05:32PM -0500, Lax Clarke wrote:
>> > Pardon?  What's caused by ovs-tcpdump??
>> >
>> > On Tue, Dec 6, 2016 at 2:17 PM, Ben Pfaff  wrote:
>> >
>> > > Oh, it's probably caused by ovs-tcpdump, now that I think about it.
>> > >
>> > > On Tue, Dec 06, 2016 at 01:20:19PM -0500, Lax Clarke wrote:
>> > > > I do not think we did.
>> > > >
>> > > > Only config we did:
>> > > >
>> > > > # Subscribers DPDK-based Bridge
>> > > > ovs-vsctl add-br flat-br-0 -- set bridge flat-br-0
>> > > > datapath_type=netdev ovs-vsctl add-port flat-br-0 dpdk0 -- set
>> > > > Interface dpdk0 type=dpdk ovs-vsctl add-port flat-br-0
>> > > > stack-1-pts-1-subscribers-1 -- set Interface
>> > > > stack-1-pts-1-subscribers-1 type=dpdkvhostuser
>> > > >
>> > > > # Internet DPDK-based Bridge
>> > > > ovs-vsctl add-br flat-br-1 -- set bridge flat-br-1
>> > > > datapath_type=netdev ovs-vsctl add-port flat-br-1 dpdk1 -- set
>> > > > Interface dpdk1 type=dpdk ovs-vsctl add-port flat-br-1
>> > > > stack-1-pts-1-internet-1 -- set Interface
>> > > > stack-1-pts-1-internet-1 type=dpdkvhostuser
>> > > >
>> > > > # Bring it up (not sure if needed) ip link set dev flat-br-0 up ip
>> > > > link set dev flat-br-1 up
>> > > >
>> > > > # Enable multi-queue on host side, 4 queues ovs-vsctl set
>> > > > interface dpdk0 options:n_rxq=2 ovs-vsctl set interface dpdk1
>> > > > options:n_rxq=2
>
> How are you launching the VMs (i.e. QEMU, libvirt)?
> What parameters are you using to launch & setup multi-queue for the VM 
> interfaces?

+1 - I'd like to see this information, as well.

Can you also do an `ls -lah $OVS_RUNDIR` where OVS_RUNDIR= the path to
the runtime directory of the vswitchd (usually either
/var/run/openvswitch, or /usr/var/run/openvswitch, or
/usr/local/var/run/openvswitch).

and a `ps aux | grep qemu`

There may be permissions issues when using vhost-user server mode
(possibly even selinux issues with the unix domain socket).

> Thanks
> Ian
>
. . .

>> > > > > > The error I see in ovs switch log says:
>> > > > > > 2016-12-06T17:14:41.170Z|00092|bridge|ERR|bridge flat-br-0:
>> > > > > > mirror
>> > > > > > m_flat-br-0 does not specify output; ignoring
>> > > > > > 2016-12-06T17:14:41.170Z|00093|bridge|ERR|bridge flat-br-0:
>> > > > > > mirror
>> > > > > > m_flat-br-0 does not specify output; ignoring
>> > > > > > 2016-12-06T17:14:41.170Z|00094|bridge|ERR|bridge flat-br-0:
>> > > > > > mirror
>> > > > > > m_flat-br-0 does not specify output; ignoring
>> > > > > > 2016-12-06T17:14:41.171Z|00095|bridge|ERR|bridge flat-br-1:
>> > > > > > mirror
>> > > > > > m_flat-br-1 does not specify output; ignoring
>> > > > > > 2016-12-06T17:14:41.171Z|00096|bridge|ERR|bridge flat-br-1:
>> > > > > > mirror
>> > > > > > m_flat-br-1 does not specify output; ignoring
>> > > > > > 2016-12-06T17:14:41.171Z|00097|bridge|ERR|bridge flat-br-1:
>> > > > > > mirror
>> > > > > > m_flat-br-1 does not specify output; ignoring

Sorry for jumping into this late;  I agree with Ben, this is likely
a mirror error caused by ovs-tcpdump.  I never tried it on the bridge
port, so I'll do some tests and see if I can recreate it.  In the
meantime, if you didn't create those mirror ports, feel free to remove
them.
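I believe something like the following clears a bridge's mirrors (using the
flat-br-0 bridge from your config as an example):

    ovs-vsctl clear Bridge flat-br-0 mirrors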

I doubt it is related to your connectivity issue, though.

>> > > > > > What does this mean?
>> > > > > > I browsed code here
>> > > > > > https://github.com/osrg/openvswitch/blob/master/
>> > > vswitchd/bridge.c#L3641
>> > > > > > It seems to suggest that I need either an output port or an
>> > > > > > output
>> > > vlan.
>> > > > >
>> > > > > How did you configure mirroring?
>> > > > >
>> > >
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] Packets not forwarded to queues after hitting flows in OVS QoS

2016-12-07 Thread Ben Pfaff
On Wed, Dec 07, 2016 at 12:20:24PM +0100, Santhosh R P wrote:
> Hi,
> 
> The physical port (enp0s25) of the host and the tap devices (vnet0 and
> vnet1) of two hosted VMs are added to an OVS bridge named "test".
> 
>ovs-vsctl show
>Bridge test
>Port test
>Interface test
>type: internal
>Port "vnet0"
>Interface "vnet0"
>Port "enp0s25"
>Interface "enp0s25"
>Port "vnet1"
>Interface "vnet1"
>ovs_version: "2.5.1"
> 
> I want to apply traffic shaping for the traffic leaving from these two VMs
> to an external machine over the internet. So, I created qos for enp0s25
> with two queues using,
> 
> ovs-vsctl  set port enp0s25 qos=@newqos -- --id=@newqos create qos
> type=linux-htb queues:1=@vnet0queue queues:2=@vnet1queue --
> --id=@vnet0queue create queue other-config:max-rate=1 --
> --id=@vnet1queue create queue other-config:max-rate=2
> 
> 
> QoS and queues are created as expected (both in OVS and TC).
> 
> 
> ovs-vsctl list qos
> 
>_uuid   : 9e8b295d-13cf-4c18-9df6-f75809bb1184
> 
>external_ids: {}
> 
>other_config: {}
> 
>queues  : {1=c9f1e852-9b29-4736-8b83-8ce1562be857,
> 2=3c61b22c-87a4-4380-96b5-822000d2bc94}
> 
>type: linux-htb
> 
> 
> ovs-ofctl -O OpenFlow13 queue-stats test
> 
>OFPST_QUEUE reply (OF1.3) (xid=0x2): 3 queues
> 
>port 1 queue 0: bytes=369596, pkts=1119, errors=0, duration=4294961079.
> 3744967296s
> 
>port 1 queue 1: bytes=108, pkts=2, errors=0, duration=4294961079.
> 3744967296s
> 
>port 1 queue 2: bytes=108, pkts=2, errors=0, duration=4294961079.
> 3744967296s
> 
> Then, as shown in the FAQ, I created flows to direct the traffic from VMs
> to these queues.
> 
> 
> ovs-ofctl add-flow test priority=2000,in_port=2,actions=set_queue:1,normal
> ovs-ofctl add-flow test priority=2000,in_port=3,actions=set_queue:2,normal
> 
> ovs-ofctl add-flow test priority=10,actions=normal
> 
> 
> The ofport of vnet0 and vnet1 are verified to 2 and 3 respectively
> (using: ovs-vsctl
> -- --columns=name,ofport list Interface) and the flows are configured
> correctly,
> 
> When I use iPerf3 from VM1 to an external IP, I could see that packet count
> is increasing in Port 2 (using: ovs-ofctl dump-ports test).
> 
> 
> The flows before the iPerf test looks like this,
> 
> 
> ovs-ofctl dump-flows test
> 
> NXST_FLOW reply (xid=0x4):
> 
> cookie=0x0, duration=7.439s, table=0, n_packets=0, n_bytes=0,
> idle_age=7, priority=2000,in_port=2 actions=set_queue:1,NORMAL
> 
> cookie=0x0, duration=7.433s, table=0, n_packets=0, n_bytes=0,
> idle_age=7, priority=2000,in_port=3 actions=set_queue:2,NORMAL
> 
> cookie=0x0, duration=6.263s, table=0, n_packets=55, n_bytes=17232,
> idle_age=0, priority=10 actions=NORMAL
> 
> 
> And after the iPerf test, looks like this (with packet counts increased in
> flow 1 and 3),
> 
> 
> ovs-ofctl dump-flows test
> 
> NXST_FLOW reply (xid=0x4):
> 
> cookie=0x0, duration=628.046s, table=0, n_packets=26200,
> n_bytes=1104961741, idle_age=2, priority=2000,in_port=2
> actions=set_queue:1,NORMAL
> 
> cookie=0x0, duration=628.040s, table=0, n_packets=11, n_bytes=594,
> idle_age=14, priority=2000,in_port=3 actions=set_queue:2,NORMAL
> 
> cookie=0x0, duration=626.870s, table=0, n_packets=203437,
> n_bytes=1118596590, idle_age=0, priority=10 actions=NORMAL (packet count
> here increased along with first flow in same magnitude)
> 
> But, the packets are not reaching the corresponding queues (verified in
> both OVS and TC), and no traffic shaping takes place. The default queue 0
> gets filled instead.
> 
> 
> On the other hand, the flows are hit, queues are filled and traffic shaping
> takes place if I use the dest IP address of iperf test as filter in the
> flow like this,
> 
> 
> ovs-ofctl add-flow test priority=2000,ip,nw_dst=134.60.64.159,
> actions=set_queue:1,normal
> ovs-ofctl add-flow test priority=2000,in_port=3,actions=set_queue:2,normal
> 
> ovs-ofctl add-flow test priority=10,actions=normal
> 
> 
> Using the src IP of VM1 or the port for VM1, hits the flows, but packets
> are not forwarded to the queue.
> 
> If this has something to do with, I used IP Masquerading in the host with
> private IPs for both VMs.
> 
> In ovs-vswitchd.log, there is no information other than added and deleted
> flows.
> 
> 
> What am I doing wrong here? Would be grateful to anyone who can help.

What's showing up in the kernel flows as displayed by "ovs-dpctl
dump-flows" while the test is running?
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] Openvswitch flow(or rule) to implement CAPTIVE-PORTAL (or HTTP redirect)

2016-12-07 Thread Guru Shetty
On 6 December 2016 at 20:01, Joo Yong-Seok  wrote:

> Hello,
>
> Is there any good example for openvswitch flow/rules for captive-portal?
> Which means,
>
> - We should perform DNAT (with captive-portal web server IP) for outbound
> HTTP traffic
> - When responses are back, we should do proper NAT again.
>
> The issue is that the DIP of HTTP packets from the client is not fixed; it
> can be google, yahoo, facebook, or anything.  But all HTTP packets should be
> redirected to a specific web server, and the response should be received
> properly on the client side.
>

OVS flows can only match fields up to L4.  So if you know the L4 port (e.g.,
80) that you want redirected to your captive portal, it should be possible.

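A minimal, untested sketch of the idea using OVS conntrack NAT (bridge name,
portal IP, and table numbers below are placeholders; this assumes OVS 2.6+
with a conntrack-NAT-capable kernel):

    # DNAT outbound HTTP to the portal; conntrack reverses it for the replies.
    ovs-ofctl add-flow br0 "table=0,priority=100,tcp,tp_dst=80,actions=ct(commit,nat(dst=198.51.100.1),table=1)"
    # Send return traffic from the portal back through conntrack so it is un-NATed.
    ovs-ofctl add-flow br0 "table=0,priority=100,tcp,tp_src=80,actions=ct(nat,table=1)"
    ovs-ofctl add-flow br0 "table=0,priority=0,actions=normal"
    ovs-ofctl add-flow br0 "table=1,priority=0,actions=normal"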

>
> Thanks in advance.
>
> Best regards,
>
> - yongseok
>
> ___
> discuss mailing list
> disc...@openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
>
>
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


[ovs-discuss] Packets not forwarded to queues after hitting flows in OVS QoS

2016-12-07 Thread Santhosh R P
Hi,

The physical port (enp0s25) of the host and the tap devices (vnet0 and
vnet1) of two hosted VMs are added to an OVS bridge named "test".

   ovs-vsctl show
   Bridge test
   Port test
   Interface test
   type: internal
   Port "vnet0"
   Interface "vnet0"
   Port "enp0s25"
   Interface "enp0s25"
   Port "vnet1"
   Interface "vnet1"
   ovs_version: "2.5.1"

I want to apply traffic shaping to the traffic leaving these two VMs for an
external machine over the internet.  So I created a QoS for enp0s25 with two
queues using:

ovs-vsctl  set port enp0s25 qos=@newqos -- --id=@newqos create qos
type=linux-htb queues:1=@vnet0queue queues:2=@vnet1queue --
--id=@vnet0queue create queue other-config:max-rate=1 --
--id=@vnet1queue create queue other-config:max-rate=2


QoS and queues are created as expected (both in OVS and TC).


ovs-vsctl list qos

   _uuid   : 9e8b295d-13cf-4c18-9df6-f75809bb1184

   external_ids: {}

   other_config: {}

   queues  : {1=c9f1e852-9b29-4736-8b83-8ce1562be857,
2=3c61b22c-87a4-4380-96b5-822000d2bc94}

   type: linux-htb


ovs-ofctl -O OpenFlow13 queue-stats test

   OFPST_QUEUE reply (OF1.3) (xid=0x2): 3 queues

   port 1 queue 0: bytes=369596, pkts=1119, errors=0, duration=4294961079.
3744967296s

   port 1 queue 1: bytes=108, pkts=2, errors=0, duration=4294961079.
3744967296s

   port 1 queue 2: bytes=108, pkts=2, errors=0, duration=4294961079.
3744967296s

Then, as shown in the FAQ, I created flows to direct the traffic from VMs
to these queues.


ovs-ofctl add-flow test priority=2000,in_port=2,actions=set_queue:1,normal
ovs-ofctl add-flow test priority=2000,in_port=3,actions=set_queue:2,normal

ovs-ofctl add-flow test priority=10,actions=normal


The ofports of vnet0 and vnet1 are verified to be 2 and 3, respectively
(using: ovs-vsctl -- --columns=name,ofport list Interface), and the flows are
configured correctly.

When I run iPerf3 from VM1 to an external IP, I can see that the packet count
is increasing on port 2 (using: ovs-ofctl dump-ports test).


The flows before the iPerf test look like this:


ovs-ofctl dump-flows test

NXST_FLOW reply (xid=0x4):

cookie=0x0, duration=7.439s, table=0, n_packets=0, n_bytes=0,
idle_age=7, priority=2000,in_port=2 actions=set_queue:1,NORMAL

cookie=0x0, duration=7.433s, table=0, n_packets=0, n_bytes=0,
idle_age=7, priority=2000,in_port=3 actions=set_queue:2,NORMAL

cookie=0x0, duration=6.263s, table=0, n_packets=55, n_bytes=17232,
idle_age=0, priority=10 actions=NORMAL


And after the iPerf test, they look like this (with packet counts increased
in flows 1 and 3):


ovs-ofctl dump-flows test

NXST_FLOW reply (xid=0x4):

cookie=0x0, duration=628.046s, table=0, n_packets=26200,
n_bytes=1104961741, idle_age=2, priority=2000,in_port=2
actions=set_queue:1,NORMAL

cookie=0x0, duration=628.040s, table=0, n_packets=11, n_bytes=594,
idle_age=14, priority=2000,in_port=3 actions=set_queue:2,NORMAL

cookie=0x0, duration=626.870s, table=0, n_packets=203437,
n_bytes=1118596590, idle_age=0, priority=10 actions=NORMAL (packet count
here increased along with first flow in same magnitude)

But, the packets are not reaching the corresponding queues (verified in
both OVS and TC), and no traffic shaping takes place. The default queue 0
gets filled instead.
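
(For reference, the TC-side counters can be inspected with something like:
tc -s class show dev enp0s25.)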


On the other hand, the flows are hit, the queues are filled, and traffic
shaping takes place if I use the destination IP address of the iperf test as
a filter in the flow, like this:


ovs-ofctl add-flow test priority=2000,ip,nw_dst=134.60.64.159,
actions=set_queue:1,normal
ovs-ofctl add-flow test priority=2000,in_port=3,actions=set_queue:2,normal

ovs-ofctl add-flow test priority=10,actions=normal


Using the src IP of VM1 or the in_port of VM1 hits the flows, but the packets
are not forwarded to the queue.

In case it matters: I use IP masquerading on the host, with private IPs for
both VMs.

In ovs-vswitchd.log, there is no information other than added and deleted
flows.


What am I doing wrong here?  I would be grateful to anyone who can help.


Thank you for your attention,

Santhosh.
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss