Re: [ovs-discuss] Issue with connection tracking for packets modified in pipeline

2017-06-01 Thread Joe Stringer
On 1 June 2017 at 05:23, Aswin S  wrote:
> Hi,
> When SG is implemented using conntrack rules, a TCP connection via FIP
> between VMs on the same compute node fails.

What is SG?

Is FIP "floating IP"?

> In my topology I have two VMs on the same compute node, both with a floating
> IP associated, and the FIP translation is done using OpenFlow rules.
>
> When using the VMs' internal network IPs it works fine and I can ssh to the
> other VM.
>
> The conntrack event logs are as follows (src IP 10.100.5.5, dest IP
> 10.100.5.12):
>[NEW] tcp  6 120 SYN_SENT src=10.100.5.5 dst=10.100.5.12 sport=43724
> dport=22 [UNREPLIED] src=10.100.5.12 dst=10.100.5.5 sport=22 dport=43724
> zone=5001
>  [UPDATE] tcp  6 60 SYN_RECV src=10.100.5.5 dst=10.100.5.12 sport=43724
> dport=22 src=10.100.5.12 dst=10.100.5.5 sport=22 dport=43724 zone=5001
>  [UPDATE] tcp  6 432000 ESTABLISHED src=10.100.5.5 dst=10.100.5.12
> sport=43724 dport=22 src=10.100.5.12 dst=10.100.5.5 sport=22 dport=43724
> [ASSURED] zone=5001
>
>
> But when I use FIP, the TCP packets are marked as invalid and dropped.
> The SYN reaches the second VM, which sends back the SYN-ACK, and the status
> of the conntrack entry is updated at the destination. Though the SYN-ACK
> reaches VM1, the conntrack state still remains UNREPLIED, and the ACK packet
> sent to VM2 is marked as invalid and dropped. In the pipeline the packet is
> submitted to conntrack on both the egress and ingress sides. The packet is
> submitted to conntrack after the FIP modification.
>
> The conntrack event logs (Vm1 10.100.5.5, 192.168.56.29, Vm2 10.100.5.12,
> 192.168.56.23)
>
>  [NEW] tcp  6 120 SYN_SENT src=10.100.5.12 dst=192.168.56.29
> sport=58218 dport=22 [UNREPLIED] src=192.168.56.29 dst=10.100.5.12 sport=22
> dport=58218 zone=5001
> [NEW] tcp  6 120 SYN_SENT src=192.168.56.23 dst=10.100.5.5
> sport=58218 dport=22 [UNREPLIED] src=10.100.5.5 dst=192.168.56.23 sport=22
> dport=58218 zone=5001
>  [UPDATE] tcp  6 60 SYN_RECV src=192.168.56.23 dst=10.100.5.5
> sport=58218 dport=22 src=10.100.5.5 dst=192.168.56.23 sport=22 dport=58218
> zone=5001

It looks like you're modifying the destination address on traffic from
VM1->VM2 before submitting to conntrack, and modifying the source
address on traffic from VM2->VM1 before submitting to conntrack, which
means that conntrack is not seeing bidirectional traffic between the two
physical IPs, nor is it seeing bidirectional traffic between the floating
IPs. Rather, it is seeing two unidirectional connections, one between
VM1's physical IP and VM2's FIP, and one between VM1's FIP and VM2's
physical IP. When the Linux connection tracker sees a SYN-ACK that is
not a reply to a SYN it has tracked, it classifies the packet as
invalid, leading to your drop.
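
One way to avoid this (a rough sketch, not your exact pipeline: the
bridge name, priorities, and table numbers below are placeholders, and
it requires the CT NAT support added in OVS 2.6 with a recent kernel)
would be to let conntrack perform the FIP translation itself via
ct(nat), so that a single conntrack entry covers both directions:

  # Outbound: commit the connection and have conntrack rewrite the
  # destination FIP to VM2's internal address.
  ovs-ofctl add-flow br-int "table=0,priority=100,ip,nw_dst=192.168.56.23,actions=ct(commit,zone=5001,nat(dst=10.100.5.12),table=1)"
  # Return traffic: a bare nat reverses the translation for packets that
  # belong to a tracked connection, so replies hit the same entry.
  ovs-ofctl add-flow br-int "table=0,priority=100,ip,nw_src=10.100.5.12,actions=ct(zone=5001,nat,table=1)"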

> The issue is not limited to TCP; if I try ICMP with FIP, the ping packets
> from VM1 to VM2 always show up as new in the connection tracker. This
> works (both TCP and ICMP) fine if the VMs are on two different compute
> nodes. So is this an issue when a packet modified in the pipeline is
> submitted to the connection tracker? Does netfilter/OVS conntrack check any
> field other than src IP/port and dest IP/port when marking a packet as a
> reply packet?

It deals with the actual packet contents when you execute the ct() action.

> I am using OVS 2.7.0. I reported an issue a while ago [1] which still
> exists, and this seems to be related.
>
> [1]https://mail.openvswitch.org/pipermail/ovs-discuss/2016-December/043228.html

It looks like you're seeing events corresponding to modified packets
according to your output above, so I don't see the relation to this
other thread.

Cheers,
Joe


Re: [ovs-discuss] vxlan offload via dpif

2017-06-01 Thread Joe Stringer
On 1 June 2017 at 01:19, Santhosh Alladi  wrote:
> Hi Joe,
>
> Thank you for your reply.
> In our solution we are not using the Linux vxlan driver; rather, we have
> our own vxlan driver in our accelerator. So, for an accelerator that is
> connected via dpif, how can we get the tunnel information for
> decapsulating the packets?
>
> Also, can you briefly explain how the vxlan device will get the tunnel
> information to decap the packet if COLLECT_METADATA mode is enabled?

Based on what I see in the Linux implementation, I'd expect that your
vxlan driver's receive path should receive encapsulated vxlan packets,
so it should have direct access to the relevant information. It is then
responsible for extracting the metadata, decapsulating the packet, and
providing it to the OVS processing path in the form OVS expects. If
you're plugging into the regular OVS kernel module, this should be a
metadata_dst attached to the skb. By the time it gets up to the dpif
layer, it should appear as a list of ovs_tunnel_key_attrs in the
netlink-formatted key.
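
If it helps with debugging, you can confirm what the dpif layer is
seeing by dumping the datapath flows; for packets received with tunnel
metadata, each matching flow's key includes a tunnel(...) section built
from those attributes:

  # Dump the datapath flows via ovs-vswitchd; tunnel metadata shows up
  # as a tunnel(...) block at the start of the flow key.
  ovs-appctl dpctl/dump-flows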


Re: [ovs-discuss] Help needed to mirror a port

2017-06-01 Thread LuisMi Cruz
Thanks, but I want the traffic copied to another server.
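
Would pointing the mirror's output-port at a tunnel port work? A rough
sketch of what I have in mind, with a hypothetical GRE port and an
example remote address:

  # Create a GRE port towards the remote capture server.
  ovs-vsctl add-port xapi3 gre0 -- set interface gre0 type=gre options:remote_ip=192.0.2.10
  # Point the mirror at the tunnel port instead of a local vif.
  ovs-vsctl -- set Bridge xapi3 mirrors=@m \
    -- --id=@src get Port vif208.1 \
    -- --id=@out get Port gre0 \
    -- --id=@m create Mirror name=remote-mirror select-src-port=@src select-dst-port=@src output-port=@out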

On 1 Jun 2017 21:50, "Aaron Conole"  wrote:

LuisMi Cruz  writes:

> Hello all,
>
> This is probably a very common question, but I wouldn't be here if I
> weren't fully desperate.
>
> I am trying to do a simple port mirror and it is not working.
>
> The scenario is:
> Bridge: xapi3
>
> Source port  vif208.1
>
> Destination port vif210.2
>
> # ovs-vsctl --version
>
> ovs-vsctl (Open vSwitch) 2.3.2
>
> Compiled Feb 16 2017 14:07:50
>
> DB Schema 7.6.2
>
> I can't change the version, just in case someone is thinking of a
> potential upgrade.
>
> I executed the next command:
>
> ovs-vsctl -- set Bridge xapi3 mirrors=@m \
>   -- --id=@vif208.1 get Port vif208.1 \
>   -- --id=@vif210.2 get Port vif210.2 \
>   -- --id=@m create Mirror name=mymirror select-dst-port=@vif208.1 \
>      select-src-port=@vif208.1 output-port=@vif210.2
>

I might also suggest using ovs-tcpdump, which will do the mirror and run
tcpdump on the mirrored interface for you.


Re: [ovs-discuss] Help needed to mirror a port

2017-06-01 Thread Aaron Conole
LuisMi Cruz  writes:

> Hello all,
>
> This is probably a very common question, but I wouldn't be here if I
> weren't fully desperate.
>
> I am trying to do a simple port mirror and it is not working.
>
> The scenario is:
> Bridge: xapi3
>
> Source port  vif208.1
>
> Destination port vif210.2
>
> # ovs-vsctl --version
>
> ovs-vsctl (Open vSwitch) 2.3.2
>
> Compiled Feb 16 2017 14:07:50
>
> DB Schema 7.6.2
>
> I can't change the version, just in case someone is thinking of a
> potential upgrade.
>
> I executed the next command:
>
> ovs-vsctl -- set Bridge xapi3 mirrors=@m \
>   -- --id=@vif208.1 get Port vif208.1 \
>   -- --id=@vif210.2 get Port vif210.2 \
>   -- --id=@m create Mirror name=mymirror select-dst-port=@vif208.1 \
>      select-src-port=@vif208.1 output-port=@vif210.2
>

I might also suggest using ovs-tcpdump, which will do the mirror and run
tcpdump on the mirrored interface for you.
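
For reference, a typical invocation looks like the following; note that
ovs-tcpdump ships with OVS 2.6 and later, so it is not available on a
stock 2.3.2 install:

  # Sets up a temporary mirror of vif208.1 and runs tcpdump on it; the
  # mirror is torn down again when tcpdump exits.
  ovs-tcpdump -i vif208.1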


Re: [ovs-discuss] LISP Tunneling

2017-06-01 Thread Ben Pfaff
On Wed, May 17, 2017 at 02:52:39PM +0200, Ashish Kurian wrote:
> Dear OVS folks,
> 
> I have some doubts regarding LISP tunneling. I have a setup where I am
> getting incoming LISP tunneled packets into my OVS. What I want to do is to
> check the inner IP destination address and based on that I need to forward
> the packets. Let us say that there are only two possibilities for inner IP
> addresses : 10.0.0.1 and 10.0.0.2.
> 
> If the inner IP address is 10.0.0.1, then I want the packet to be forwarded
> to an interface (say eth1 and port number 1) without doing any change to
> the tunneled packet. If the inner IP address is 10.0.0.2, then I want the
> packet to be forwarded to another interface (say eth2 and port number
> 2) with only the inner contents of the tunnel packets.
> 
> I am thinking of the following flow entries to do the mentioned rules, but
> correct me if I am wrong.
> 
> table=0,dl_type=0x0800,nw_dst=10.0.0.2,actions=mod_dl_dst=10:0:0:2,output:2
> 
>-  Will this flow check for the inner destination IP of the
>tunneled packet and put only the metadata in port eth2?

I don't know what it means to "put only the metadata" in a port.  What
does it mean?

>   table=0,dl_type=0x0800,action=NORMAL
> 
> 
>- Will this flow take care of all other flows?

Yes.
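
For what it's worth, matching on the inner destination IP only works
once the tunnel has been terminated. A rough sketch, assuming your OVS
build supports LISP tunnel ports and the packets terminate on one (the
bridge and port names here are hypothetical):

  # Terminate LISP in OVS so the inner headers become visible to the
  # flow table. LISP carries bare IP, so a flow must also set Ethernet
  # addresses (as your mod_dl_dst action attempts) before normal output.
  ovs-vsctl add-port br0 lisp0 -- set interface lisp0 type=lisp options:remote_ip=flow
  # Assuming lisp0 is OpenFlow port 1:
  ovs-ofctl add-flow br0 "in_port=1,ip,nw_dst=10.0.0.2,actions=mod_dl_dst=02:00:00:00:00:02,output:2"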


[ovs-discuss] Issue with connection tracking for packets modified in pipeline

2017-06-01 Thread Aswin S
Hi,
When SG is implemented using conntrack rules, a TCP connection via FIP
between VMs on the same compute node fails.

In my topology I have two VMs on the same compute node, both with a floating
IP associated, and the FIP translation is done using OpenFlow rules.

When using the VMs' internal network IPs it works fine and I can ssh to the
other VM.

The conntrack event logs are as follows (src IP 10.100.5.5, dest IP
10.100.5.12):
   [NEW] tcp  6 120 SYN_SENT src=10.100.5.5 dst=10.100.5.12 sport=43724
dport=22 [UNREPLIED] src=10.100.5.12 dst=10.100.5.5 sport=22 dport=43724
zone=5001
 [UPDATE] tcp  6 60 SYN_RECV src=10.100.5.5 dst=10.100.5.12 sport=43724
dport=22 src=10.100.5.12 dst=10.100.5.5 sport=22 dport=43724 zone=5001
 [UPDATE] tcp  6 432000 ESTABLISHED src=10.100.5.5 dst=10.100.5.12
sport=43724 dport=22 src=10.100.5.12 dst=10.100.5.5 sport=22 dport=43724
[ASSURED] zone=5001


But when I use FIP, the TCP packets are marked as invalid and dropped.

The SYN reaches the second VM, which sends back the SYN-ACK, and the status
of the conntrack entry is updated at the destination. Though the SYN-ACK
reaches VM1, the conntrack state still remains UNREPLIED, and the ACK packet
sent to VM2 is marked as invalid and dropped. In the pipeline the packet is
submitted to conntrack on both the egress and ingress sides. The packet is
submitted to conntrack after the FIP modification.

The conntrack event logs (Vm1 10.100.5.5, 192.168.56.29, Vm2 10.100.5.12,
192.168.56.23)

 [NEW] tcp  6 120 SYN_SENT src=10.100.5.12 dst=192.168.56.29
sport=58218 dport=22 [UNREPLIED] src=192.168.56.29 dst=10.100.5.12 sport=22
dport=58218 zone=5001
[NEW] tcp  6 120 SYN_SENT src=192.168.56.23 dst=10.100.5.5
sport=58218 dport=22 [UNREPLIED] src=10.100.5.5 dst=192.168.56.23 sport=22
dport=58218 zone=5001
 [UPDATE] tcp  6 60 SYN_RECV src=192.168.56.23 dst=10.100.5.5
sport=58218 dport=22 src=10.100.5.5 dst=192.168.56.23 sport=22 dport=58218
zone=5001


The issue is not limited to TCP; if I try ICMP with FIP, the ping packets
from VM1 to VM2 always show up as new in the connection tracker. This
works (both TCP and ICMP) fine if the VMs are on two different compute
nodes. So is this an issue when a packet modified in the pipeline is
submitted to the connection tracker? Does netfilter/OVS conntrack check any
field other than src IP/port and dest IP/port when marking a packet as a
reply packet?

I am using OVS 2.7.0. I reported an issue a while ago [1] which still
exists, and this seems to be related.

[1]
https://mail.openvswitch.org/pipermail/ovs-discuss/2016-December/043228.html


Thanks
Aswin


Re: [ovs-discuss] vxlan offload via dpif

2017-06-01 Thread Santhosh Alladi
Hi Joe,

Thank you for your reply.
In our solution we are not using the Linux vxlan driver; rather, we have our
own vxlan driver in our accelerator. So, for an accelerator that is connected
via dpif, how can we get the tunnel information for decapsulating the
packets?

Also, can you briefly explain how the vxlan device will get the tunnel
information to decap the packet if COLLECT_METADATA mode is enabled?

Regards,
Santhosh

-Original Message-
From: Joe Stringer [mailto:j...@ovn.org] 
Sent: Thursday, June 01, 2017 2:06 AM
To: Santhosh Alladi 
Cc: ovs-discuss@openvswitch.org
Subject: Re: [ovs-discuss] vxlan offload via dpif

On 31 May 2017 at 06:27, Santhosh Alladi  wrote:
> Hi all,
>
>
>
> We are trying to configure our hardware accelerator using ovs via 
> dpif. We could achieve L2 forwarding using this setup.
>
> Now, we are trying to offload the complete functionality of vxlan. In this
> context, how does vxlan processing take place in the OVS kernel module? How
> can we get the tunnel information to our hardware via dpif?

The Linux kernel provides flow-based tunneling by attaching a "metadata_dst"
to the packet.

For instance, when the OVS kernel module wants to send a packet out a vxlan
device, it attaches the metadata_dst to the skbuff and transmits on the vxlan
device. The Linux stack then takes over: it encapsulates the packet with the
provided metadata and performs a route lookup to determine the next
(underlay) hop for the packet.

On the receive side, the vxlan device must be set up in COLLECT_METADATA
mode. This sets up a receiver on the UDP socket which can receive the packet,
decap it, and attach the tunnel metadata as a metadata_dst before calling the
device receive path. The packet is then received in a similar way to any
other OVS-attached device, but OVS checks whether a metadata_dst is set prior
to performing the flow table lookup. If there is one, its contents are pulled
into the flow key.
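
For comparison, the receive-side setup with the standard Linux driver looks
roughly like this (device and bridge names are examples; your accelerator's
driver would need to provide the equivalent behaviour):

  # Create a vxlan device in collect-metadata ("external") mode; its
  # receive path decaps and attaches the tunnel info as a metadata_dst.
  ip link add vxlan0 type vxlan dstport 4789 external
  ip link set vxlan0 up
  # Attach it to the bridge so OVS sees decapsulated packets with the
  # tunnel metadata populated in the flow key.
  ovs-vsctl add-port br0 vxlan0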