Re: [ovs-discuss] ovn duplicate nat items cause DVR fip cannot access

2021-09-18 Thread Slawek Kaplonski
> > | status          | DOWN                 |
> > | tags            |                      |
> > | trunk_details   | None                 |
> > | updated_at      | 2021-07-27T13:02:57Z |
> > +-----------------+----------------------+
> > 
> > router 0a38aa37-6b88-4871-add0-dee277c0e54f
> > (neutron-18f83463-1f11-4905-ab44-92fd9364c8d4) (aka
> > cn-shanghai-on-prem-router) ...
> > --
> > 
> > nat 02d4cbf8-35e6-4842-8866-0331b025ca01
> > 
> > external ip: "10.120.25.133"
> > logical ip: "10.118.0.0/19"
> > type: "snat"
> > 
> > ...
> > 
> > 
> > and now the nbctl show info is:
> > 
> > 
> > 
> > router 0a38aa37-6b88-4871-add0-dee277c0e54f
> > (neutron-18f83463-1f11-4905-ab44-92fd9364c8d4) (aka
> > cn-shanghai-on-prem-router)> 
> > port lrp-53355fc7-002d-4a32-8ca9-f5f244c6fafd
> > 
> > mac: "fa:16:3e:be:04:68"
> > 
> > --
> > 
> > nat 02d4cbf8-35e6-4842-8866-0331b025ca01
> > 
> > external ip: "10.120.25.133"
> > logical ip: "10.118.0.0/19"
> > type: "snat"
> > 
> > nat 0a8bbdca-8af0-4f8f-a8c1-e2b76b091f8c
> > 
> > --
> > 
> > nat 8a28dcc8-34f2-4cca-b78f-d3b9ae9dee2e
> > 
> > external ip: "10.120.25.133"
> > logical ip: "10.118.0.0/19"
> > type: "snat"
> > 
> > as you can see, the snat entry 10.120.25.133 : 10.118.0.0/19 is duplicated
> > 
> > 
> > thanks for your help
> > 
> > 
> > 
> > 张兵兵
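
One way to inspect and clean up the duplicated snat row directly in the OVN NB
database could be a sketch like the one below (the router name and the NAT row
UUID are taken from your output above; please double check before deleting
anything, since Neutron may reconcile or re-create these rows):

  # list the NAT rules on the router to confirm the duplicate:
  ovn-nbctl lr-nat-list neutron-18f83463-1f11-4905-ab44-92fd9364c8d4

  # remove one of the two identical snat rows by its UUID
  # (8a28dcc8-... is the second duplicate shown above):
  ovn-nbctl remove Logical_Router neutron-18f83463-1f11-4905-ab44-92fd9364c8d4 \
      nat 8a28dcc8-34f2-4cca-b78f-d3b9ae9dee2e

That is only a manual workaround sketch; the real question is still why the row
was created twice in the first place.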


-- 
Slawek Kaplonski
Principal Software Engineer
Red Hat



[ovs-discuss] Transparent vlan and flat neutron networks

2021-01-08 Thread Slawek Kaplonski
Hi,

I was checking today why vlan tagged packets can't reach the destination vm
when a flat network with transparent vlan is used.

It seems to me that in the case of a flat neutron network, the openflow rule
created by ovn in table=0 looks like this:

 cookie=0x779c3409, duration=5186.275s, table=0, n_packets=316, n_bytes=37480,
 priority=100,in_port="patch-br-int-to",vlan_tci=0x0000/0x1000
 actions=load:0x4->NXM_NX_REG13[],load:0x3->NXM_NX_REG11[],load:0x2->NXM_NX_REG12[],
 load:0x2->OXM_OF_METADATA[],load:0x1->NXM_NX_REG14[],resubmit(,8)


So if there is vlan tagged traffic on that network, it doesn't match that rule (or
any other rule in table=0) and such packets aren't processed by any further rules.

When I added a rule like:

 cookie=0x0, duration=450.544s, table=0, n_packets=70, n_bytes=7194,
 in_port="patch-br-int-to"
 actions=load:0x4->NXM_NX_REG13[],load:0x3->NXM_NX_REG11[],load:0x2->NXM_NX_REG12[],
 load:0x2->OXM_OF_METADATA[],load:0x1->NXM_NX_REG14[],resubmit(,8)

With that rule in place, connectivity was working fine.
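
For anyone who wants to reproduce the check, a minimal way to see whether tagged
traffic matches anything in table=0 is something like this (a sketch; dl_vlan=100
and the MAC addresses are just placeholders, the port name is the one from the
flows above):

  # dump table=0 and watch which rule (if any) the tagged packets hit:
  ovs-ofctl dump-flows br-int table=0

  # trace a vlan tagged packet coming in from the provider patch port:
  ovs-appctl ofproto/trace br-int \
      'in_port=patch-br-int-to,dl_vlan=100,dl_src=fa:16:3e:00:00:01,dl_dst=fa:16:3e:00:00:02'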

@Ihar - do you think this should be changed in ovn, as a follow-up to your
patch [1], so that it does not match on "vlan_tci=0x0000/0x1000" for flat networks
with vlan transparency=True?
Or @Daniel - maybe the neutron-ovn driver should set something differently? I
couldn't find today what exactly the difference is between flat and vlan networks
in the ovn mechanism driver.

[1] 
https://patchwork.ozlabs.org/project/ovn/patch/20201110023449.194642-1-ihrac...@redhat.com/
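
For reference, a quick way to check how the network is represented on the OVN
side (a sketch; <ls> stands for the neutron-<network_id> logical switch, and if I
recall correctly the key added by [1] is other_config:vlan-passthru):

  # check whether vlan transparency is requested on the logical switch:
  ovn-nbctl get Logical_Switch <ls> other_config

  # the localnet port of a vlan network carries a tag, a flat one does not:
  ovn-nbctl find Logical_Switch_Port type=localnet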

-- 
Slawek Kaplonski
Principal Software Engineer
Red Hat



[ovs-discuss] [OVN] Vlan transparency issue

2020-12-18 Thread Slawek Kaplonski
Hi,

Some time ago Ihar made a patch [1] which allows Neutron to use vlan transparent
networks with the OVN backend.
This works fine in most cases, but we found out that it's not working when
port_security is enabled in Neutron (so conntrack is used) and a Neutron vlan
network is used. So effectively we have vlan-in-vlan traffic coming to the
compute node in that case.
When we then ping vm1 -> vm2, the icmp requests are properly delivered to vm2,
but the replies are dropped in br-int due to the rule:

cookie=0x1a1c569, duration=1421.304s, table=15, n_packets=1007, n_bytes=102714, 
priority=65535,ct_state=+inv+trk,metadata=0x3 actions=drop
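
A quick way to confirm that on the compute node is to watch the drop rule counter
while pinging and to check the conntrack table directly (a sketch; <vm2_ip> is a
placeholder for vm2's address):

  # watch the hit counter of the drop rule growing during the ping:
  ovs-ofctl dump-flows br-int table=15 | grep 'ct_state=+inv+trk'

  # confirm that no conntrack entry was created for the flow:
  ovs-appctl dpctl/dump-conntrack | grep <vm2_ip>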

Daniel and I spent some time investigating why the packets are treated as invalid
by conntrack, and our understanding is that for some reason the incoming packets
(icmp requests from vm1 -> vm2) don't match the rule:

cookie=0x93de161, duration=1524.892s, table=41, n_packets=0, n_bytes=0, 
priority=100,ip,metadata=0x3 actions=load:0x1->NXM_NX_XXREG0[96],resubmit(,42)
which corresponds to the logical flow:
uuid=0x093de161, table=1 (ls_out_pre_acl ), priority=100  , match=(ip), 
action=(reg0[0] = 1; next;)

and then they also don't match the rules:
cookie=0x619723d4, duration=1559.433s, table=42, n_packets=0, n_bytes=0, 
priority=100,ip,reg0=0x1/0x1,metadata=0x3 
actions=ct(table=43,zone=NXM_NX_REG13[0..15])
Logical Flow:
uuid=0x619723d4, table=2 (ls_out_pre_stateful), priority=100  , 
match=(reg0[0] == 1), action=(ct_next;)

and:
cookie=0x835ca96b, duration=1576.728s, table=48, n_packets=0, n_bytes=0, 
priority=100,ip,reg0=0x2/0x2,metadata=0x3 
actions=ct(commit,zone=NXM_NX_REG13[0..15],exec(load:0->NXM_NX_CT_LABEL[0])),resubmit(,49)
Logical Flow:
uuid=0x835ca96b, table=8 (ls_out_stateful), priority=100  , 
match=(reg0[1] == 1), action=(ct_commit { ct_label.blocked = 0; }; next;)


As a result, a conntrack entry isn't created, so the reply is treated as an
invalid packet by conntrack.

From the Neutron perspective such vlan tagged packets should just be passed to
the VM without any SG filtering, but I don't know what is wrong or what we are
missing in those rules to achieve that.
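
To double check that the tagged traffic really bypasses those output pre-ACL /
pre-stateful stages, one can watch the counters of the corresponding OpenFlow
tables while generating traffic (a sketch; metadata=0x3 is the datapath value
taken from the flows above):

  # counters should grow for untagged traffic but stay at 0 for the
  # vlan-in-vlan traffic described above:
  ovs-ofctl dump-flows br-int table=41 | grep metadata=0x3
  ovs-ofctl dump-flows br-int table=42 | grep metadata=0x3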

[1] 
https://patchwork.ozlabs.org/project/ovn/patch/20201110023449.194642-1-ihrac...@redhat.com/

-- 
Slawek Kaplonski
Principal Software Engineer
Red Hat



Re: [ovs-discuss] Vlan transparency in OVN

2020-06-04 Thread Slawek Kaplonski
Hi,

On Tue, Jun 02, 2020 at 05:37:37PM -0700, Ben Pfaff wrote:
> On Tue, Jun 02, 2020 at 01:25:05PM +0200, Slawek Kaplonski wrote:
> > Hi,
> > 
> > I work mostly on OpenStack Neutron. We have an extension there called
> > "vlan_transparent". See [1] for details.
> > Basically it allows sending traffic with vlan tags directly to the VMs.
> > 
> > Recently I was testing whether that extension works with the OVN backend used
> > in Neutron, and it seems that we have some work to do to make it work.
> > From my test I found out that for each port I had a rule like:
> > 
> > cookie=0x0, duration=17.580s, table=8, n_packets=6, n_bytes=444, 
> > idle_age=2, priority=100,metadata=0x2,vlan_tci=0x1000/0x1000 actions=drop
> > 
> > which was dropping those tagged packets. After removing this rule, traffic was
> > fine.
> > So we need some way to tell northd that it shouldn't match on vlan_tci
> > at all when the neutron network has vlan_transparency set to True.
> > 
> > From a discussion with Daniel Alvarez, he suggested that we could somehow try
> > to leverage existing columns to request transparency (for example:
> > parent_name=none and tag_request=0). With this, northd could enforce
> > transparency per port.
> > 
> > Another option could be to create an option in the "other_config" column of
> > the logical switch to have the setting per Neutron network
> > (other_config:vlan_transparent). While this seems more natural, it may break
> > the current trunk/subport feature.
> > 
> > What do you, as ovn developers, think about that?
> > Is it maybe possible to do this somehow in northd currently? Or is one of the
> > options given above doable and acceptable for you?
> 
> This might be a place to consider using QinQ (at least, until Neutron
> introduces QinQ transparency).

I'm not sure if I understand. For now Neutron doesn't support QinQ - the old RFE
is currently postponed [1].
And my original use case is related to Neutron tenant networks, which are of the
Geneve type. How can QinQ help with that?


[1] https://bugs.launchpad.net/neutron/+bug/1705719

-- 
Slawek Kaplonski
Senior software engineer
Red Hat



[ovs-discuss] Vlan transparency in OVN

2020-06-02 Thread Slawek Kaplonski
Hi,

I work mostly on OpenStack Neutron. We have an extension there called
"vlan_transparent". See [1] for details.
Basically it allows sending traffic with vlan tags directly to the VMs.

Recently I was testing whether that extension works with the OVN backend used in
Neutron, and it seems that we have some work to do to make it work.
From my test I found out that for each port I had a rule like:

cookie=0x0, duration=17.580s, table=8, n_packets=6, n_bytes=444, 
idle_age=2, priority=100,metadata=0x2,vlan_tci=0x1000/0x1000 actions=drop

which was dropping those tagged packets. After removing this rule, traffic was
fine.
So we need some way to tell northd that it shouldn't match on vlan_tci
at all when the neutron network has vlan_transparency set to True.

From a discussion with Daniel Alvarez, he suggested that we could somehow try to
leverage existing columns to request transparency (for example: parent_name=none
and tag_request=0). With this, northd could enforce transparency per port.

Another option could be to create an option in the "other_config" column of the
logical switch to have the setting per Neutron network
(other_config:vlan_transparent). While this seems more natural, it may break the
current trunk/subport feature.
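
Just to illustrate what that second option could look like from the Neutron
driver side, a hypothetical sketch (the key name vlan_transparent is only the one
proposed above, not an existing OVN option, and neutron-<network_id> is the usual
name of the logical switch created by the Neutron driver):

  # hypothetical per-network knob, mirroring Neutron's vlan_transparent flag:
  ovn-nbctl set Logical_Switch neutron-<network_id> other_config:vlan_transparent=true

  # northd would then skip the vlan_tci drop flow for the ports of that switch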

What do you, as ovn developers, think about that?
Is it maybe possible to do this somehow in northd currently? Or is one of the
options given above doable and acceptable for you?

[1] 
https://specs.openstack.org/openstack/neutron-specs/specs/kilo/nfv-vlan-trunks.html

-- 
Slawek Kaplonski
Senior software engineer
Red Hat
