Re: [ovs-dev] [PATCH v7 1/4] conntrack: handle already natted packets

2021-06-30 Thread Dumitru Ceara
On 6/29/21 6:04 PM, Paolo Valerio wrote:
> Dumitru Ceara  writes:
> 
>> On 6/25/21 2:01 PM, Paolo Valerio wrote:
>>> Dumitru Ceara  writes:
>>>
 On 6/21/21 12:06 PM, Paolo Valerio wrote:
> when a packet gets dnatted and then recirculated, it may match
> another rule that performs another nat action.  The kernel datapath
> handles this situation by turning the second nat action into a no-op,
> so the packet is natted only once.  In the userspace datapath instead,
> when the ct action gets executed, an initial lookup of the translated
> packet fails to retrieve the connection related to the packet, leading
> to the creation of a new ct entry for the src nat action and a
> subsequent failure of the connection establishment.
>
> with the following flows:
>
> table=0,priority=30,in_port=1,ip,nw_dst=192.168.2.100,actions=ct(commit,nat(dst=10.1.1.2:80),table=1)
> table=0,priority=20,in_port=2,ip,actions=ct(nat,table=1)
> table=0,priority=10,ip,actions=resubmit(,2)
> table=0,priority=10,arp,actions=NORMAL
> table=0,priority=0,actions=drop
> table=1,priority=5,ip,actions=ct(commit,nat(src=10.1.1.240),table=2)
> table=2,in_port=ovs-l0,actions=2
> table=2,in_port=ovs-r0,actions=1
>
> establishing a connection from 10.1.1.1 to 192.168.2.100, the outcome is:
>
> tcp,orig=(src=10.1.1.1,dst=10.1.1.2,sport=4000,dport=80),reply=(src=10.1.1.2,dst=10.1.1.240,sport=80,dport=4000),protoinfo=(state=ESTABLISHED)
> tcp,orig=(src=10.1.1.1,dst=192.168.2.100,sport=4000,dport=80),reply=(src=10.1.1.2,dst=10.1.1.1,sport=80,dport=4000),protoinfo=(state=ESTABLISHED)
>
> with this patch applied the outcome is:
>
> tcp,orig=(src=10.1.1.1,dst=192.168.2.100,sport=4000,dport=80),reply=(src=10.1.1.2,dst=10.1.1.1,sport=80,dport=4000),protoinfo=(state=ESTABLISHED)
>
> The patch performs, for already natted packets, a lookup of the
> reverse key in order to retrieve the related entry.  It also adds a
> test case that, besides testing the scenario, ensures that the other
> ct actions are executed.
>
> Reported-by: Dumitru Ceara 
> Signed-off-by: Paolo Valerio 
> ---

 Hi Paolo,

 Thanks for the patch!  I tested it and it works fine for OVN.  I have a
 few comments/questions below.

>>>
>>> Thanks for the test and for the review.
>>>
>  lib/conntrack.c |   30 +-
>  tests/system-traffic.at |   35 +++
>  2 files changed, 64 insertions(+), 1 deletion(-)
>
> diff --git a/lib/conntrack.c b/lib/conntrack.c
> index 99198a601..7e8b16a3e 100644
> --- a/lib/conntrack.c
> +++ b/lib/conntrack.c
> @@ -1281,6 +1281,33 @@ process_one_fast(uint16_t zone, const uint32_t *setmark,
>  }
>  }
>  
> +static void
> +initial_conn_lookup(struct conntrack *ct, struct conn_lookup_ctx *ctx,
> + long long now, bool natted)

 Nit: indentation.

>>>
>>> ACK
>>>
> +{
> +    bool found;
> +
> +    if (natted) {
> +        /* if the packet has been already natted (e.g. a previous
> +         * action took place), retrieve it performing a lookup of its
> +         * reverse key. */
> +        conn_key_reverse(&ctx->key);
> +    }
> +
> +    found = conn_key_lookup(ct, &ctx->key, ctx->hash,
> +                            now, &ctx->conn, &ctx->reply);
> +    if (natted) {
> +        if (OVS_LIKELY(found)) {
> +            ctx->reply = !ctx->reply;
> +            ctx->key = ctx->reply ? ctx->conn->rev_key : ctx->conn->key;
> +            ctx->hash = conn_key_hash(&ctx->key, ct->hash_basis);
> +        } else {
> +            /* in case of failure restore the initial key. */
> +            conn_key_reverse(&ctx->key);

 Can the lookup actually fail?  I mean, if the packet was natted, there
 must have been a connection on which it got natted.  Anyway, I think we
 should probably also increment a coverage counter.  I guess dropping
 such packets would be hard, right?

>>>
>>> I agree, it should not fail. If I'm not missing something, if it
>>> happens, it should be because there's been a problem somewhere else
>>> (e.g. a polluted ct_state value or more in general an unexpected
>>> scenario). For this reason, I think it's better not to drop it or even
>>> set it as invalid.
>>
>> I'm not sure, won't this create horrible to debug bugs when packets get
>> forwarded in an unexpected way?  Setting it as invalid isn't good enough
>> in my opinion because there might be flows later in the pipeline that
>> perform actions (other than drop) on packets with ct_state +inv.
>>
>> The problem I have (because I don't know the conntrack code) is that I
>> see no easy way to drop the packet.
>>
>>>
>>> Yes, the coverage counter gives more meaning to the else branch.
>>> 

Re: [ovs-dev] [PATCH v7 1/4] conntrack: handle already natted packets

2021-06-29 Thread Paolo Valerio
Dumitru Ceara  writes:

> On 6/25/21 2:01 PM, Paolo Valerio wrote:
>> Dumitru Ceara  writes:
>> 
>>> On 6/21/21 12:06 PM, Paolo Valerio wrote:
when a packet gets dnatted and then recirculated, it may match
another rule that performs another nat action.  The kernel datapath
handles this situation by turning the second nat action into a no-op,
so the packet is natted only once.  In the userspace datapath instead,
when the ct action gets executed, an initial lookup of the translated
packet fails to retrieve the connection related to the packet, leading
to the creation of a new ct entry for the src nat action and a
subsequent failure of the connection establishment.

 with the following flows:

 table=0,priority=30,in_port=1,ip,nw_dst=192.168.2.100,actions=ct(commit,nat(dst=10.1.1.2:80),table=1)
 table=0,priority=20,in_port=2,ip,actions=ct(nat,table=1)
 table=0,priority=10,ip,actions=resubmit(,2)
 table=0,priority=10,arp,actions=NORMAL
 table=0,priority=0,actions=drop
 table=1,priority=5,ip,actions=ct(commit,nat(src=10.1.1.240),table=2)
 table=2,in_port=ovs-l0,actions=2
 table=2,in_port=ovs-r0,actions=1

establishing a connection from 10.1.1.1 to 192.168.2.100, the outcome is:

 tcp,orig=(src=10.1.1.1,dst=10.1.1.2,sport=4000,dport=80),reply=(src=10.1.1.2,dst=10.1.1.240,sport=80,dport=4000),protoinfo=(state=ESTABLISHED)
 tcp,orig=(src=10.1.1.1,dst=192.168.2.100,sport=4000,dport=80),reply=(src=10.1.1.2,dst=10.1.1.1,sport=80,dport=4000),protoinfo=(state=ESTABLISHED)

 with this patch applied the outcome is:

 tcp,orig=(src=10.1.1.1,dst=192.168.2.100,sport=4000,dport=80),reply=(src=10.1.1.2,dst=10.1.1.1,sport=80,dport=4000),protoinfo=(state=ESTABLISHED)

The patch performs, for already natted packets, a lookup of the
reverse key in order to retrieve the related entry.  It also adds a
test case that, besides testing the scenario, ensures that the other
ct actions are executed.

 Reported-by: Dumitru Ceara 
 Signed-off-by: Paolo Valerio 
 ---
>>>
>>> Hi Paolo,
>>>
>>> Thanks for the patch!  I tested it and it works fine for OVN.  I have a
>>> few comments/questions below.
>>>
>> 
>> Thanks for the test and for the review.
>> 
  lib/conntrack.c |   30 +-
  tests/system-traffic.at |   35 +++
  2 files changed, 64 insertions(+), 1 deletion(-)

 diff --git a/lib/conntrack.c b/lib/conntrack.c
 index 99198a601..7e8b16a3e 100644
 --- a/lib/conntrack.c
 +++ b/lib/conntrack.c
@@ -1281,6 +1281,33 @@ process_one_fast(uint16_t zone, const uint32_t *setmark,
  }
  }
  
 +static void
 +initial_conn_lookup(struct conntrack *ct, struct conn_lookup_ctx *ctx,
 + long long now, bool natted)
>>>
>>> Nit: indentation.
>>>
>> 
>> ACK
>> 
+{
+    bool found;
+
+    if (natted) {
+        /* if the packet has been already natted (e.g. a previous
+         * action took place), retrieve it performing a lookup of its
+         * reverse key. */
+        conn_key_reverse(&ctx->key);
+    }
+
+    found = conn_key_lookup(ct, &ctx->key, ctx->hash,
+                            now, &ctx->conn, &ctx->reply);
+    if (natted) {
+        if (OVS_LIKELY(found)) {
+            ctx->reply = !ctx->reply;
+            ctx->key = ctx->reply ? ctx->conn->rev_key : ctx->conn->key;
+            ctx->hash = conn_key_hash(&ctx->key, ct->hash_basis);
+        } else {
+            /* in case of failure restore the initial key. */
+            conn_key_reverse(&ctx->key);
>>>
>>> Can the lookup actually fail?  I mean, if the packet was natted, there
>>> must have been a connection on which it got natted.  Anyway, I think we
>>> should probably also increment a coverage counter.  I guess dropping
>>> such packets would be hard, right?
>>>
>> 
>> I agree, it should not fail. If I'm not missing something, if it
>> happens, it should be because there's been a problem somewhere else
>> (e.g. a polluted ct_state value or more in general an unexpected
>> scenario). For this reason, I think it's better not to drop it or even
>> set it as invalid.
>
> I'm not sure, won't this create horrible to debug bugs when packets get
> forwarded in an unexpected way?  Setting it as invalid isn't good enough
> in my opinion because there might be flows later in the pipeline that
> perform actions (other than drop) on packets with ct_state +inv.
>
> The problem I have (because I don't know the conntrack code) is that I
> see no easy way to drop the packet.
>
>> 
>> Yes, the coverage counter gives more meaning to the else branch.
>> Alternatively, we could probably even remove it. I would leave the NULL
>> check or equivalent.
>> 
>> I have no strong preference.
>> WDYT?
>> 
>
> I would prefer a 

Re: [ovs-dev] [PATCH v7 1/4] conntrack: handle already natted packets

2021-06-25 Thread Paolo Valerio
Dumitru Ceara  writes:

> On 6/25/21 2:01 PM, Paolo Valerio wrote:
>> Dumitru Ceara  writes:
>> 
>>> On 6/21/21 12:06 PM, Paolo Valerio wrote:
when a packet gets dnatted and then recirculated, it may match
another rule that performs another nat action.  The kernel datapath
handles this situation by turning the second nat action into a no-op,
so the packet is natted only once.  In the userspace datapath instead,
when the ct action gets executed, an initial lookup of the translated
packet fails to retrieve the connection related to the packet, leading
to the creation of a new ct entry for the src nat action and a
subsequent failure of the connection establishment.

 with the following flows:

 table=0,priority=30,in_port=1,ip,nw_dst=192.168.2.100,actions=ct(commit,nat(dst=10.1.1.2:80),table=1)
 table=0,priority=20,in_port=2,ip,actions=ct(nat,table=1)
 table=0,priority=10,ip,actions=resubmit(,2)
 table=0,priority=10,arp,actions=NORMAL
 table=0,priority=0,actions=drop
 table=1,priority=5,ip,actions=ct(commit,nat(src=10.1.1.240),table=2)
 table=2,in_port=ovs-l0,actions=2
 table=2,in_port=ovs-r0,actions=1

establishing a connection from 10.1.1.1 to 192.168.2.100, the outcome is:

 tcp,orig=(src=10.1.1.1,dst=10.1.1.2,sport=4000,dport=80),reply=(src=10.1.1.2,dst=10.1.1.240,sport=80,dport=4000),protoinfo=(state=ESTABLISHED)
 tcp,orig=(src=10.1.1.1,dst=192.168.2.100,sport=4000,dport=80),reply=(src=10.1.1.2,dst=10.1.1.1,sport=80,dport=4000),protoinfo=(state=ESTABLISHED)

 with this patch applied the outcome is:

 tcp,orig=(src=10.1.1.1,dst=192.168.2.100,sport=4000,dport=80),reply=(src=10.1.1.2,dst=10.1.1.1,sport=80,dport=4000),protoinfo=(state=ESTABLISHED)

The patch performs, for already natted packets, a lookup of the
reverse key in order to retrieve the related entry.  It also adds a
test case that, besides testing the scenario, ensures that the other
ct actions are executed.

 Reported-by: Dumitru Ceara 
 Signed-off-by: Paolo Valerio 
 ---
>>>
>>> Hi Paolo,
>>>
>>> Thanks for the patch!  I tested it and it works fine for OVN.  I have a
>>> few comments/questions below.
>>>
>> 
>> Thanks for the test and for the review.
>> 
  lib/conntrack.c |   30 +-
  tests/system-traffic.at |   35 +++
  2 files changed, 64 insertions(+), 1 deletion(-)

 diff --git a/lib/conntrack.c b/lib/conntrack.c
 index 99198a601..7e8b16a3e 100644
 --- a/lib/conntrack.c
 +++ b/lib/conntrack.c
@@ -1281,6 +1281,33 @@ process_one_fast(uint16_t zone, const uint32_t *setmark,
  }
  }
  
 +static void
 +initial_conn_lookup(struct conntrack *ct, struct conn_lookup_ctx *ctx,
 + long long now, bool natted)
>>>
>>> Nit: indentation.
>>>
>> 
>> ACK
>> 
+{
+    bool found;
+
+    if (natted) {
+        /* if the packet has been already natted (e.g. a previous
+         * action took place), retrieve it performing a lookup of its
+         * reverse key. */
+        conn_key_reverse(&ctx->key);
+    }
+
+    found = conn_key_lookup(ct, &ctx->key, ctx->hash,
+                            now, &ctx->conn, &ctx->reply);
+    if (natted) {
+        if (OVS_LIKELY(found)) {
+            ctx->reply = !ctx->reply;
+            ctx->key = ctx->reply ? ctx->conn->rev_key : ctx->conn->key;
+            ctx->hash = conn_key_hash(&ctx->key, ct->hash_basis);
+        } else {
+            /* in case of failure restore the initial key. */
+            conn_key_reverse(&ctx->key);
>>>
>>> Can the lookup actually fail?  I mean, if the packet was natted, there
>>> must have been a connection on which it got natted.  Anyway, I think we
>>> should probably also increment a coverage counter.  I guess dropping
>>> such packets would be hard, right?
>>>
>> 
>> I agree, it should not fail. If I'm not missing something, if it
>> happens, it should be because there's been a problem somewhere else
>> (e.g. a polluted ct_state value or more in general an unexpected
>> scenario). For this reason, I think it's better not to drop it or even
>> set it as invalid.
>
> I'm not sure, won't this create horrible to debug bugs when packets get
> forwarded in an unexpected way?  Setting it as invalid isn't good enough
> in my opinion because there might be flows later in the pipeline that
> perform actions (other than drop) on packets with ct_state +inv.
>
> The problem I have (because I don't know the conntrack code) is that I
> see no easy way to drop the packet.
>

I see your point and I understand it.
AFAICS, it doesn't seem to be an easy way to do it.

>> 
>> Yes, the coverage counter gives more meaning to the else branch.
>> Alternatively, we could probably even remove it. I would leave the NULL
>> 

Re: [ovs-dev] [PATCH v7 1/4] conntrack: handle already natted packets

2021-06-25 Thread Dumitru Ceara
On 6/25/21 2:01 PM, Paolo Valerio wrote:
> Dumitru Ceara  writes:
> 
>> On 6/21/21 12:06 PM, Paolo Valerio wrote:
>>> when a packet gets dnatted and then recirculated, it may match
>>> another rule that performs another nat action.  The kernel datapath
>>> handles this situation by turning the second nat action into a no-op,
>>> so the packet is natted only once.  In the userspace datapath instead,
>>> when the ct action gets executed, an initial lookup of the translated
>>> packet fails to retrieve the connection related to the packet, leading
>>> to the creation of a new ct entry for the src nat action and a
>>> subsequent failure of the connection establishment.
>>>
>>> with the following flows:
>>>
>>> table=0,priority=30,in_port=1,ip,nw_dst=192.168.2.100,actions=ct(commit,nat(dst=10.1.1.2:80),table=1)
>>> table=0,priority=20,in_port=2,ip,actions=ct(nat,table=1)
>>> table=0,priority=10,ip,actions=resubmit(,2)
>>> table=0,priority=10,arp,actions=NORMAL
>>> table=0,priority=0,actions=drop
>>> table=1,priority=5,ip,actions=ct(commit,nat(src=10.1.1.240),table=2)
>>> table=2,in_port=ovs-l0,actions=2
>>> table=2,in_port=ovs-r0,actions=1
>>>
>>> establishing a connection from 10.1.1.1 to 192.168.2.100, the outcome is:
>>>
>>> tcp,orig=(src=10.1.1.1,dst=10.1.1.2,sport=4000,dport=80),reply=(src=10.1.1.2,dst=10.1.1.240,sport=80,dport=4000),protoinfo=(state=ESTABLISHED)
>>> tcp,orig=(src=10.1.1.1,dst=192.168.2.100,sport=4000,dport=80),reply=(src=10.1.1.2,dst=10.1.1.1,sport=80,dport=4000),protoinfo=(state=ESTABLISHED)
>>>
>>> with this patch applied the outcome is:
>>>
>>> tcp,orig=(src=10.1.1.1,dst=192.168.2.100,sport=4000,dport=80),reply=(src=10.1.1.2,dst=10.1.1.1,sport=80,dport=4000),protoinfo=(state=ESTABLISHED)
>>>
>>> The patch performs, for already natted packets, a lookup of the
>>> reverse key in order to retrieve the related entry.  It also adds a
>>> test case that, besides testing the scenario, ensures that the other
>>> ct actions are executed.
>>>
>>> Reported-by: Dumitru Ceara 
>>> Signed-off-by: Paolo Valerio 
>>> ---
>>
>> Hi Paolo,
>>
>> Thanks for the patch!  I tested it and it works fine for OVN.  I have a
>> few comments/questions below.
>>
> 
> Thanks for the test and for the review.
> 
>>>  lib/conntrack.c |   30 +-
>>>  tests/system-traffic.at |   35 +++
>>>  2 files changed, 64 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/lib/conntrack.c b/lib/conntrack.c
>>> index 99198a601..7e8b16a3e 100644
>>> --- a/lib/conntrack.c
>>> +++ b/lib/conntrack.c
>>> @@ -1281,6 +1281,33 @@ process_one_fast(uint16_t zone, const uint32_t *setmark,
>>>  }
>>>  }
>>>  
>>> +static void
>>> +initial_conn_lookup(struct conntrack *ct, struct conn_lookup_ctx *ctx,
>>> + long long now, bool natted)
>>
>> Nit: indentation.
>>
> 
> ACK
> 
>>> +{
>>> +    bool found;
>>> +
>>> +    if (natted) {
>>> +        /* if the packet has been already natted (e.g. a previous
>>> +         * action took place), retrieve it performing a lookup of its
>>> +         * reverse key. */
>>> +        conn_key_reverse(&ctx->key);
>>> +    }
>>> +
>>> +    found = conn_key_lookup(ct, &ctx->key, ctx->hash,
>>> +                            now, &ctx->conn, &ctx->reply);
>>> +    if (natted) {
>>> +        if (OVS_LIKELY(found)) {
>>> +            ctx->reply = !ctx->reply;
>>> +            ctx->key = ctx->reply ? ctx->conn->rev_key : ctx->conn->key;
>>> +            ctx->hash = conn_key_hash(&ctx->key, ct->hash_basis);
>>> +        } else {
>>> +            /* in case of failure restore the initial key. */
>>> +            conn_key_reverse(&ctx->key);
>>
>> Can the lookup actually fail?  I mean, if the packet was natted, there
>> must have been a connection on which it got natted.  Anyway, I think we
>> should probably also increment a coverage counter.  I guess dropping
>> such packets would be hard, right?
>>
> 
> I agree, it should not fail. If I'm not missing something, if it
> happens, it should be because there's been a problem somewhere else
> (e.g. a polluted ct_state value or more in general an unexpected
> scenario). For this reason, I think it's better not to drop it or even
> set it as invalid.

I'm not sure, won't this create horrible to debug bugs when packets get
forwarded in an unexpected way?  Setting it as invalid isn't good enough
in my opinion because there might be flows later in the pipeline that
perform actions (other than drop) on packets with ct_state +inv.

The problem I have (because I don't know the conntrack code) is that I
see no easy way to drop the packet.

> 
> Yes, the coverage counter gives more meaning to the else branch.
> Alternatively, we could probably even remove it. I would leave the NULL
> check or equivalent.
> 
> I have no strong preference.
> WDYT?
> 

I would prefer a coverage counter, at least to have an indication of a
problem "somewhere" in conntrack.

Regards,
Dumitru


Re: [ovs-dev] [PATCH v7 1/4] conntrack: handle already natted packets

2021-06-25 Thread Paolo Valerio
Dumitru Ceara  writes:

> On 6/21/21 12:06 PM, Paolo Valerio wrote:
>> when a packet gets dnatted and then recirculated, it may match
>> another rule that performs another nat action.  The kernel datapath
>> handles this situation by turning the second nat action into a no-op,
>> so the packet is natted only once.  In the userspace datapath instead,
>> when the ct action gets executed, an initial lookup of the translated
>> packet fails to retrieve the connection related to the packet, leading
>> to the creation of a new ct entry for the src nat action and a
>> subsequent failure of the connection establishment.
>> 
>> with the following flows:
>> 
>> table=0,priority=30,in_port=1,ip,nw_dst=192.168.2.100,actions=ct(commit,nat(dst=10.1.1.2:80),table=1)
>> table=0,priority=20,in_port=2,ip,actions=ct(nat,table=1)
>> table=0,priority=10,ip,actions=resubmit(,2)
>> table=0,priority=10,arp,actions=NORMAL
>> table=0,priority=0,actions=drop
>> table=1,priority=5,ip,actions=ct(commit,nat(src=10.1.1.240),table=2)
>> table=2,in_port=ovs-l0,actions=2
>> table=2,in_port=ovs-r0,actions=1
>> 
>> establishing a connection from 10.1.1.1 to 192.168.2.100, the outcome is:
>> 
>> tcp,orig=(src=10.1.1.1,dst=10.1.1.2,sport=4000,dport=80),reply=(src=10.1.1.2,dst=10.1.1.240,sport=80,dport=4000),protoinfo=(state=ESTABLISHED)
>> tcp,orig=(src=10.1.1.1,dst=192.168.2.100,sport=4000,dport=80),reply=(src=10.1.1.2,dst=10.1.1.1,sport=80,dport=4000),protoinfo=(state=ESTABLISHED)
>> 
>> with this patch applied the outcome is:
>> 
>> tcp,orig=(src=10.1.1.1,dst=192.168.2.100,sport=4000,dport=80),reply=(src=10.1.1.2,dst=10.1.1.1,sport=80,dport=4000),protoinfo=(state=ESTABLISHED)
>> 
>> The patch performs, for already natted packets, a lookup of the
>> reverse key in order to retrieve the related entry.  It also adds a
>> test case that, besides testing the scenario, ensures that the other
>> ct actions are executed.
>> 
>> Reported-by: Dumitru Ceara 
>> Signed-off-by: Paolo Valerio 
>> ---
>
> Hi Paolo,
>
> Thanks for the patch!  I tested it and it works fine for OVN.  I have a
> few comments/questions below.
>

Thanks for the test and for the review.

>>  lib/conntrack.c |   30 +-
>>  tests/system-traffic.at |   35 +++
>>  2 files changed, 64 insertions(+), 1 deletion(-)
>> 
>> diff --git a/lib/conntrack.c b/lib/conntrack.c
>> index 99198a601..7e8b16a3e 100644
>> --- a/lib/conntrack.c
>> +++ b/lib/conntrack.c
>> @@ -1281,6 +1281,33 @@ process_one_fast(uint16_t zone, const uint32_t *setmark,
>>  }
>>  }
>>  
>> +static void
>> +initial_conn_lookup(struct conntrack *ct, struct conn_lookup_ctx *ctx,
>> + long long now, bool natted)
>
> Nit: indentation.
>

ACK

>> +{
>> +    bool found;
>> +
>> +    if (natted) {
>> +        /* if the packet has been already natted (e.g. a previous
>> +         * action took place), retrieve it performing a lookup of its
>> +         * reverse key. */
>> +        conn_key_reverse(&ctx->key);
>> +    }
>> +
>> +    found = conn_key_lookup(ct, &ctx->key, ctx->hash,
>> +                            now, &ctx->conn, &ctx->reply);
>> +    if (natted) {
>> +        if (OVS_LIKELY(found)) {
>> +            ctx->reply = !ctx->reply;
>> +            ctx->key = ctx->reply ? ctx->conn->rev_key : ctx->conn->key;
>> +            ctx->hash = conn_key_hash(&ctx->key, ct->hash_basis);
>> +        } else {
>> +            /* in case of failure restore the initial key. */
>> +            conn_key_reverse(&ctx->key);
>
> Can the lookup actually fail?  I mean, if the packet was natted, there
> must have been a connection on which it got natted.  Anyway, I think we
> should probably also increment a coverage counter.  I guess dropping
> such packets would be hard, right?
>

I agree, it should not fail. If I'm not missing something, if it
happens, it should be because there's been a problem somewhere else
(e.g. a polluted ct_state value or more in general an unexpected
scenario). For this reason, I think it's better not to drop it or even
set it as invalid.

Yes, the coverage counter gives more meaning to the else branch.
Alternatively, we could probably even remove it. I would leave the NULL
check or equivalent.

I have no strong preference.
WDYT?

> Thanks,
> Dumitru

___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


Re: [ovs-dev] [PATCH v7 1/4] conntrack: handle already natted packets

2021-06-24 Thread Dumitru Ceara
On 6/21/21 12:06 PM, Paolo Valerio wrote:
> when a packet gets dnatted and then recirculated, it may match
> another rule that performs another nat action.  The kernel datapath
> handles this situation by turning the second nat action into a no-op,
> so the packet is natted only once.  In the userspace datapath instead,
> when the ct action gets executed, an initial lookup of the translated
> packet fails to retrieve the connection related to the packet, leading
> to the creation of a new ct entry for the src nat action and a
> subsequent failure of the connection establishment.
> 
> with the following flows:
> 
> table=0,priority=30,in_port=1,ip,nw_dst=192.168.2.100,actions=ct(commit,nat(dst=10.1.1.2:80),table=1)
> table=0,priority=20,in_port=2,ip,actions=ct(nat,table=1)
> table=0,priority=10,ip,actions=resubmit(,2)
> table=0,priority=10,arp,actions=NORMAL
> table=0,priority=0,actions=drop
> table=1,priority=5,ip,actions=ct(commit,nat(src=10.1.1.240),table=2)
> table=2,in_port=ovs-l0,actions=2
> table=2,in_port=ovs-r0,actions=1
> 
> establishing a connection from 10.1.1.1 to 192.168.2.100, the outcome is:
> 
> tcp,orig=(src=10.1.1.1,dst=10.1.1.2,sport=4000,dport=80),reply=(src=10.1.1.2,dst=10.1.1.240,sport=80,dport=4000),protoinfo=(state=ESTABLISHED)
> tcp,orig=(src=10.1.1.1,dst=192.168.2.100,sport=4000,dport=80),reply=(src=10.1.1.2,dst=10.1.1.1,sport=80,dport=4000),protoinfo=(state=ESTABLISHED)
> 
> with this patch applied the outcome is:
> 
> tcp,orig=(src=10.1.1.1,dst=192.168.2.100,sport=4000,dport=80),reply=(src=10.1.1.2,dst=10.1.1.1,sport=80,dport=4000),protoinfo=(state=ESTABLISHED)
> 
> The patch performs, for already natted packets, a lookup of the
> reverse key in order to retrieve the related entry.  It also adds a
> test case that, besides testing the scenario, ensures that the other
> ct actions are executed.
> 
> Reported-by: Dumitru Ceara 
> Signed-off-by: Paolo Valerio 
> ---

Hi Paolo,

Thanks for the patch!  I tested it and it works fine for OVN.  I have a
few comments/questions below.

>  lib/conntrack.c |   30 +-
>  tests/system-traffic.at |   35 +++
>  2 files changed, 64 insertions(+), 1 deletion(-)
> 
> diff --git a/lib/conntrack.c b/lib/conntrack.c
> index 99198a601..7e8b16a3e 100644
> --- a/lib/conntrack.c
> +++ b/lib/conntrack.c
> @@ -1281,6 +1281,33 @@ process_one_fast(uint16_t zone, const uint32_t *setmark,
>  }
>  }
>  
> +static void
> +initial_conn_lookup(struct conntrack *ct, struct conn_lookup_ctx *ctx,
> + long long now, bool natted)

Nit: indentation.

> +{
> +    bool found;
> +
> +    if (natted) {
> +        /* if the packet has been already natted (e.g. a previous
> +         * action took place), retrieve it performing a lookup of its
> +         * reverse key. */
> +        conn_key_reverse(&ctx->key);
> +    }
> +
> +    found = conn_key_lookup(ct, &ctx->key, ctx->hash,
> +                            now, &ctx->conn, &ctx->reply);
> +    if (natted) {
> +        if (OVS_LIKELY(found)) {
> +            ctx->reply = !ctx->reply;
> +            ctx->key = ctx->reply ? ctx->conn->rev_key : ctx->conn->key;
> +            ctx->hash = conn_key_hash(&ctx->key, ct->hash_basis);
> +        } else {
> +            /* in case of failure restore the initial key. */
> +            conn_key_reverse(&ctx->key);

Can the lookup actually fail?  I mean, if the packet was natted, there
must have been a connection on which it got natted.  Anyway, I think we
should probably also increment a coverage counter.  I guess dropping
such packets would be hard, right?

Thanks,
Dumitru



[ovs-dev] [PATCH v7 1/4] conntrack: handle already natted packets

2021-06-21 Thread Paolo Valerio
when a packet gets dnatted and then recirculated, it may match
another rule that performs another nat action.  The kernel datapath
handles this situation by turning the second nat action into a no-op,
so the packet is natted only once.  In the userspace datapath instead,
when the ct action gets executed, an initial lookup of the translated
packet fails to retrieve the connection related to the packet, leading
to the creation of a new ct entry for the src nat action and a
subsequent failure of the connection establishment.

with the following flows:

table=0,priority=30,in_port=1,ip,nw_dst=192.168.2.100,actions=ct(commit,nat(dst=10.1.1.2:80),table=1)
table=0,priority=20,in_port=2,ip,actions=ct(nat,table=1)
table=0,priority=10,ip,actions=resubmit(,2)
table=0,priority=10,arp,actions=NORMAL
table=0,priority=0,actions=drop
table=1,priority=5,ip,actions=ct(commit,nat(src=10.1.1.240),table=2)
table=2,in_port=ovs-l0,actions=2
table=2,in_port=ovs-r0,actions=1

establishing a connection from 10.1.1.1 to 192.168.2.100, the outcome is:

tcp,orig=(src=10.1.1.1,dst=10.1.1.2,sport=4000,dport=80),reply=(src=10.1.1.2,dst=10.1.1.240,sport=80,dport=4000),protoinfo=(state=ESTABLISHED)
tcp,orig=(src=10.1.1.1,dst=192.168.2.100,sport=4000,dport=80),reply=(src=10.1.1.2,dst=10.1.1.1,sport=80,dport=4000),protoinfo=(state=ESTABLISHED)

with this patch applied the outcome is:

tcp,orig=(src=10.1.1.1,dst=192.168.2.100,sport=4000,dport=80),reply=(src=10.1.1.2,dst=10.1.1.1,sport=80,dport=4000),protoinfo=(state=ESTABLISHED)

The patch performs, for already natted packets, a lookup of the
reverse key in order to retrieve the related entry.  It also adds a
test case that, besides testing the scenario, ensures that the other
ct actions are executed.

Reported-by: Dumitru Ceara 
Signed-off-by: Paolo Valerio 
---
 lib/conntrack.c |   30 +-
 tests/system-traffic.at |   35 +++
 2 files changed, 64 insertions(+), 1 deletion(-)

diff --git a/lib/conntrack.c b/lib/conntrack.c
index 99198a601..7e8b16a3e 100644
--- a/lib/conntrack.c
+++ b/lib/conntrack.c
@@ -1281,6 +1281,33 @@ process_one_fast(uint16_t zone, const uint32_t *setmark,
 }
 }
 
+static void
+initial_conn_lookup(struct conntrack *ct, struct conn_lookup_ctx *ctx,
+ long long now, bool natted)
+{
+    bool found;
+
+    if (natted) {
+        /* if the packet has been already natted (e.g. a previous
+         * action took place), retrieve it performing a lookup of its
+         * reverse key. */
+        conn_key_reverse(&ctx->key);
+    }
+
+    found = conn_key_lookup(ct, &ctx->key, ctx->hash,
+                            now, &ctx->conn, &ctx->reply);
+    if (natted) {
+        if (OVS_LIKELY(found)) {
+            ctx->reply = !ctx->reply;
+            ctx->key = ctx->reply ? ctx->conn->rev_key : ctx->conn->key;
+            ctx->hash = conn_key_hash(&ctx->key, ct->hash_basis);
+        } else {
+            /* in case of failure restore the initial key. */
+            conn_key_reverse(&ctx->key);
+        }
+    }
+}
+
 static void
 process_one(struct conntrack *ct, struct dp_packet *pkt,
 struct conn_lookup_ctx *ctx, uint16_t zone,
@@ -1296,7 +1323,8 @@ process_one(struct conntrack *ct, struct dp_packet *pkt,
 }
 
     bool create_new_conn = false;
-    conn_key_lookup(ct, &ctx->key, ctx->hash, now, &ctx->conn, &ctx->reply);
+    initial_conn_lookup(ct, ctx, now, !!(pkt->md.ct_state &
+                                         (CS_SRC_NAT | CS_DST_NAT)));
 struct conn *conn = ctx->conn;
 
 /* Delete found entry if in wrong direction. 'force' implies commit. */
diff --git a/tests/system-traffic.at b/tests/system-traffic.at
index 6a15d08c2..f400cfabc 100644
--- a/tests/system-traffic.at
+++ b/tests/system-traffic.at
@@ -4588,6 +4588,41 @@ tcp,orig=(src=10.1.1.1,dst=10.1.1.2,sport=,dport=),reply=(src=
 OVS_TRAFFIC_VSWITCHD_STOP
 AT_CLEANUP
 
+AT_SETUP([conntrack - DNAT with additional SNAT])
+CHECK_CONNTRACK()
+OVS_TRAFFIC_VSWITCHD_START()
+
+ADD_NAMESPACES(at_ns0, at_ns1)
+ADD_VETH(p0, at_ns0, br0, "10.1.1.1/24")
+ADD_VETH(p1, at_ns1, br0, "10.1.1.2/24")
+NS_CHECK_EXEC([at_ns0], [ip route add 172.1.1.0/24 via 10.1.1.2])
+
+OVS_START_L7([at_ns1], [http])
+
+AT_DATA([flows.txt], [dnl
+table=0,priority=30,in_port=1,ip,nw_dst=172.1.1.2,actions=ct(commit,nat(dst=10.1.1.2:80),table=1)
+table=0,priority=20,in_port=2,ip,actions=ct(nat),1
+table=0,priority=10,arp,actions=NORMAL
+table=0,priority=1,actions=drop
+dnl Be sure all ct() actions but src nat are executed
+table=1,ip,actions=ct(commit,nat(src=10.1.1.240),exec(set_field:0xac->ct_mark,set_field:0xac->ct_label),table=2)
+table=2,in_port=1,ip,ct_mark=0xac,ct_label=0xac,actions=2
+])
+AT_CHECK([ovs-ofctl --bundle add-flows br0 flows.txt])
+
NS_CHECK_EXEC([at_ns0], [wget http://172.1.1.2:8080 -t 5 -T 1 --retry-connrefused -v -o wget0.log])
+
+dnl - make sure only dst nat has been performed
+AT_CHECK([ovs-appctl dpctl/dump-conntrack |  FORMAT_CT(10.1.1.240)],