Re: [PATCH net-next] net: mvpp2: prs: improve ipv4 parse flow

2021-01-11 Thread patchwork-bot+netdevbpf
Hello:

This patch was applied to netdev/net-next.git (refs/heads/master):

On Sun, 10 Jan 2021 16:30:59 +0200 you wrote:
> From: Stefan Chulski 
> 
> This patch doesn't fix any issue; it only improves the parse flow
> and aligns the ipv4 parse flow with the ipv6 parse flow.
> 
> Currently the ipv4 kenguru parser first checks the IP protocol (TCP/UDP)
> and only then the destination IP address.
> This patch reverses the ipv4 parse order: the destination IP address is
> parsed first and only then the IP protocol.
> This allows extending the packet L4 parsing capabilities and aligns the
> ipv4 parsing flow with ipv6.
> 
> [...]

Here is the summary with links:
  - [net-next] net: mvpp2: prs: improve ipv4 parse flow
https://git.kernel.org/netdev/net-next/c/c73a45965dd5

You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html




[PATCH net-next] net: mvpp2: prs: improve ipv4 parse flow

2021-01-10 Thread stefanc
From: Stefan Chulski 

This patch doesn't fix any issue; it only improves the parse flow
and aligns the ipv4 parse flow with the ipv6 parse flow.

Currently the ipv4 kenguru parser first checks the IP protocol (TCP/UDP)
and only then the destination IP address.
This patch reverses the ipv4 parse order: the destination IP address is
parsed first and only then the IP protocol.
This allows extending the packet L4 parsing capabilities and aligns the
ipv4 parsing flow with ipv6.
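
As a rough sketch of the ordering change described above (illustration only:
the stage names below are invented, and the real lookup entries live in the
kenguru TCAM/SRAM and are programmed by the mvpp2_prs_*() helpers):

/* Illustrative stage names, not driver identifiers. */
enum ip4_parse_stage {
	IP4_STAGE_HDR,      /* basic IPv4 header checks */
	IP4_STAGE_PROTO,    /* L4 protocol match (TCP/UDP/...) */
	IP4_STAGE_DIP_CAST, /* destination-address (cast) match */
	IP4_STAGE_FLOWID,   /* flow-id generation, parse finished */
};

/* Old flow: protocol match first, destination address second. */
static const enum ip4_parse_stage ip4_flow_old[] = {
	IP4_STAGE_HDR, IP4_STAGE_PROTO, IP4_STAGE_DIP_CAST, IP4_STAGE_FLOWID,
};

/* New flow: destination address first, then protocol, matching the ipv6
 * flow and leaving room for richer L4 parsing after the address match.
 */
static const enum ip4_parse_stage ip4_flow_new[] = {
	IP4_STAGE_HDR, IP4_STAGE_DIP_CAST, IP4_STAGE_PROTO, IP4_STAGE_FLOWID,
};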

Suggested-by: Liron Himi 
Signed-off-by: Stefan Chulski 
---
 drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c | 64 
 1 file changed, 39 insertions(+), 25 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c
index 5692c60..b9e5b08 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c
@@ -882,15 +882,15 @@ static int mvpp2_prs_ip4_proto(struct mvpp2 *priv, unsigned short proto,
    mvpp2_prs_tcam_lu_set(&pe, MVPP2_PRS_LU_IP4);
    pe.index = tid;

-   /* Set next lu to IPv4 */
-   mvpp2_prs_sram_next_lu_set(&pe, MVPP2_PRS_LU_IP4);
-   mvpp2_prs_sram_shift_set(&pe, 12, MVPP2_PRS_SRAM_OP_SEL_SHIFT_ADD);
+   /* Finished: go to flowid generation */
+   mvpp2_prs_sram_next_lu_set(&pe, MVPP2_PRS_LU_FLOWS);
+   mvpp2_prs_sram_bits_set(&pe, MVPP2_PRS_SRAM_LU_GEN_BIT, 1);
+
    /* Set L4 offset */
    mvpp2_prs_sram_offset_set(&pe, MVPP2_PRS_SRAM_UDF_TYPE_L4,
                              sizeof(struct iphdr) - 4,
                              MVPP2_PRS_SRAM_OP_SEL_UDF_ADD);
-   mvpp2_prs_sram_ai_update(&pe, MVPP2_PRS_IPV4_DIP_AI_BIT,
-                            MVPP2_PRS_IPV4_DIP_AI_BIT);
+   mvpp2_prs_sram_ai_update(&pe, 0, MVPP2_PRS_IPV4_DIP_AI_BIT);
    mvpp2_prs_sram_ri_update(&pe, ri, ri_mask | MVPP2_PRS_RI_IP_FRAG_MASK);

    mvpp2_prs_tcam_data_byte_set(&pe, 2, 0x00,
@@ -899,7 +899,8 @@ static int mvpp2_prs_ip4_proto(struct mvpp2 *priv, unsigned short proto,
                                 MVPP2_PRS_TCAM_PROTO_MASK);

    mvpp2_prs_tcam_data_byte_set(&pe, 5, proto, MVPP2_PRS_TCAM_PROTO_MASK);
-   mvpp2_prs_tcam_ai_update(&pe, 0, MVPP2_PRS_IPV4_DIP_AI_BIT);
+   mvpp2_prs_tcam_ai_update(&pe, MVPP2_PRS_IPV4_DIP_AI_BIT,
+                            MVPP2_PRS_IPV4_DIP_AI_BIT);
    /* Unmask all ports */
    mvpp2_prs_tcam_port_map_set(&pe, MVPP2_PRS_PORT_MASK);

@@ -967,12 +968,17 @@ static int mvpp2_prs_ip4_cast(struct mvpp2 *priv, unsigned short l3_cast)
        return -EINVAL;
    }

-   /* Finished: go to flowid generation */
-   mvpp2_prs_sram_next_lu_set(&pe, MVPP2_PRS_LU_FLOWS);
-   mvpp2_prs_sram_bits_set(&pe, MVPP2_PRS_SRAM_LU_GEN_BIT, 1);
+   /* Go again to ipv4 */
+   mvpp2_prs_sram_next_lu_set(&pe, MVPP2_PRS_LU_IP4);

-   mvpp2_prs_tcam_ai_update(&pe, MVPP2_PRS_IPV4_DIP_AI_BIT,
+   mvpp2_prs_sram_ai_update(&pe, MVPP2_PRS_IPV4_DIP_AI_BIT,
                             MVPP2_PRS_IPV4_DIP_AI_BIT);
+
+   /* Shift back to IPv4 proto */
+   mvpp2_prs_sram_shift_set(&pe, -12, MVPP2_PRS_SRAM_OP_SEL_SHIFT_ADD);
+
+   mvpp2_prs_tcam_ai_update(&pe, 0, MVPP2_PRS_IPV4_DIP_AI_BIT);
+
    /* Unmask all ports */
    mvpp2_prs_tcam_port_map_set(&pe, MVPP2_PRS_PORT_MASK);

@@ -1392,8 +1398,9 @@ static int mvpp2_prs_etype_init(struct mvpp2 *priv)
    mvpp2_prs_sram_next_lu_set(&pe, MVPP2_PRS_LU_IP4);
    mvpp2_prs_sram_ri_update(&pe, MVPP2_PRS_RI_L3_IP4,
                             MVPP2_PRS_RI_L3_PROTO_MASK);
-   /* Skip eth_type + 4 bytes of IP header */
-   mvpp2_prs_sram_shift_set(&pe, MVPP2_ETH_TYPE_LEN + 4,
+   /* goto ipv4 dest-address (skip eth_type + IP-header-size - 4) */
+   mvpp2_prs_sram_shift_set(&pe, MVPP2_ETH_TYPE_LEN +
+                            sizeof(struct iphdr) - 4,
                             MVPP2_PRS_SRAM_OP_SEL_SHIFT_ADD);
    /* Set L3 offset */
    mvpp2_prs_sram_offset_set(&pe, MVPP2_PRS_SRAM_UDF_TYPE_L3,
@@ -1597,8 +1604,9 @@ static int mvpp2_prs_pppoe_init(struct mvpp2 *priv)
    mvpp2_prs_sram_next_lu_set(&pe, MVPP2_PRS_LU_IP4);
    mvpp2_prs_sram_ri_update(&pe, MVPP2_PRS_RI_L3_IP4_OPT,
                             MVPP2_PRS_RI_L3_PROTO_MASK);
-   /* Skip eth_type + 4 bytes of IP header */
-   mvpp2_prs_sram_shift_set(&pe, MVPP2_ETH_TYPE_LEN + 4,
+   /* goto ipv4 dest-address (skip eth_type + IP-header-size - 4) */
+   mvpp2_prs_sram_shift_set(&pe, MVPP2_ETH_TYPE_LEN +
+                            sizeof(struct iphdr) - 4,
                             MVPP2_PRS_SRAM_OP_SEL_SHIFT_ADD);
    /* Set L3 offset */
    mvpp2_prs_sram_offset_set(&pe, MVPP2_PRS_SRAM_UDF_TYPE_L3,
@@ -1727,19 +1735,20 @@ static int mvpp2_prs_ip4_init(struct mvpp2 *priv)
    mvpp2_prs_tcam_lu_set(&pe, MVPP2_PRS_LU_IP4);
    pe.index = MVPP2_PE_IP4_PROTO_UN;

-   /* Set next lu to IPv4 */
-   mvpp2_prs_sram_next_lu_set(&pe, MVPP2_PRS_LU_IP4);
-   mvpp2_prs_sram_shift_set(&pe, 12, MVPP2_PRS_SRAM_OP_SEL_SHIFT_ADD);

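For reference, a small standalone sketch of the offset arithmetic behind the
shift changes above, assuming the usual 20-byte struct iphdr and assuming
MVPP2_ETH_TYPE_LEN is 2 (both values are stated here as assumptions rather
than quoted from the driver headers):

#include <stdio.h>

#define EX_ETH_TYPE_LEN 2   /* assumed value of MVPP2_ETH_TYPE_LEN */
#define EX_IPV4_HDR_LEN 20  /* sizeof(struct iphdr) without options */

int main(void)
{
	/* Old first shift: ethertype + 4 bytes into the IP header; from that
	 * window TCAM byte 5 lines up with the protocol field (IP offset 9).
	 */
	int old_shift = EX_ETH_TYPE_LEN + 4;

	/* New first shift: ethertype + (header size - 4), i.e. straight to
	 * the destination address (IP header offset 16).
	 */
	int new_shift = EX_ETH_TYPE_LEN + EX_IPV4_HDR_LEN - 4;

	/* The ip4_cast entry then shifts back by -12, so the following
	 * protocol lookup sees the same window the old flow started from.
	 */
	int back_to_proto = new_shift - 12;

	/* Prints: old shift 6, new shift 18, after -12: 6 */
	printf("old shift %d, new shift %d, after -12: %d\n",
	       old_shift, new_shift, back_to_proto);
	return 0;
}
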
Re: [EXT] Re: [PATCH net-next] net: mvpp2: prs: improve ipv4 parse flow

2020-12-21 Thread Jakub Kicinski
On Sun, 20 Dec 2020 11:11:35 + Stefan Chulski wrote:
> > RFC patches sent for review only are obviously welcome at any time.  
> 
> If I post RFC patches for review only, should I add some prefix or tag for 
> this?

Include RFC in the tag: [RFC net-next] or [PATCH RFC net-next].
That way patchwork will automatically mark it as RFC and we'll
know you're not expecting us to apply the patch.

> And if all reviewers are OK with the change (or there are no comments at
> all), should I repost this patch after net-next reopens?

Sure, if there are no comments or you're confident the change is
correct, there is no need for an RFC posting. I'm guessing that was
your question? If you're asking whether you _have_ to repost even if
there are no comments, the answer is yes: we don't queue patches "to be
applied later", so a fresh posting will be needed.


RE: [EXT] Re: [PATCH net-next] net: mvpp2: prs: improve ipv4 parse flow

2020-12-20 Thread Stefan Chulski
> 
> --
> On Thu, 17 Dec 2020 18:07:58 +0200 stef...@marvell.com wrote:
> > From: Stefan Chulski 
> >
> > This patch doesn't fix any issue; it only improves the parse flow and
> > aligns the ipv4 parse flow with the ipv6 parse flow.
> >
> > Currently the ipv4 kenguru parser first checks the IP protocol (TCP/UDP)
> > and only then the destination IP address.
> > This patch reverses the ipv4 parse order: the destination IP address is
> > parsed first and only then the IP protocol.
> > This allows extending the packet L4 parsing capabilities and aligns the
> > ipv4 parsing flow with ipv6.
> >
> > Suggested-by: Liron Himi 
> > Signed-off-by: Stefan Chulski 
> 
> This one will need to wait until after the merge window
> 
> --
> 
> # Form letter - net-next is closed
> 
> We have already sent the networking pull request for 5.11 and therefore
> net-next is closed for new drivers, features, code refactoring and
> optimizations. We are currently accepting bug fixes only.
> 
> Please repost when net-next reopens after 5.11-rc1 is cut.

OK, Thanks.

> Look out for the announcement on the mailing list or check:
> http://vger.kernel.org/~davem/net-next.html
> 
> RFC patches sent for review only are obviously welcome at any time.

If I post RFC patches for review only, should I add some prefix or tag for this?
And if all reviewers are OK with the change (or there are no comments at all),
should I repost this patch after net-next reopens?

Thanks,
Stefan.


Re: [PATCH net-next] net: mvpp2: prs: improve ipv4 parse flow

2020-12-19 Thread Jakub Kicinski
On Thu, 17 Dec 2020 18:07:58 +0200 stef...@marvell.com wrote:
> From: Stefan Chulski 
> 
> This patch doesn't fix any issue; it only improves the parse flow
> and aligns the ipv4 parse flow with the ipv6 parse flow.
> 
> Currently the ipv4 kenguru parser first checks the IP protocol (TCP/UDP)
> and only then the destination IP address.
> This patch reverses the ipv4 parse order: the destination IP address is
> parsed first and only then the IP protocol.
> This allows extending the packet L4 parsing capabilities and aligns the
> ipv4 parsing flow with ipv6.
> 
> Suggested-by: Liron Himi 
> Signed-off-by: Stefan Chulski 

This one will need to wait until after the merge window

--

# Form letter - net-next is closed

We have already sent the networking pull request for 5.11 and therefore
net-next is closed for new drivers, features, code refactoring and
optimizations. We are currently accepting bug fixes only.

Please repost when net-next reopens after 5.11-rc1 is cut.

Look out for the announcement on the mailing list or check:
http://vger.kernel.org/~davem/net-next.html

RFC patches sent for review only are obviously welcome at any time.


[PATCH net-next] net: mvpp2: prs: improve ipv4 parse flow

2020-12-17 Thread stefanc
From: Stefan Chulski 

This patch doesn't fix any issue; it only improves the parse flow
and aligns the ipv4 parse flow with the ipv6 parse flow.

Currently the ipv4 kenguru parser first checks the IP protocol (TCP/UDP)
and only then the destination IP address.
This patch reverses the ipv4 parse order: the destination IP address is
parsed first and only then the IP protocol.
This allows extending the packet L4 parsing capabilities and aligns the
ipv4 parsing flow with ipv6.

Suggested-by: Liron Himi 
Signed-off-by: Stefan Chulski 
---
 drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c | 64 --
 1 file changed, 39 insertions(+), 25 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c
index 5692c60..b9e5b08 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c
@@ -882,15 +882,15 @@ static int mvpp2_prs_ip4_proto(struct mvpp2 *priv, unsigned short proto,
    mvpp2_prs_tcam_lu_set(&pe, MVPP2_PRS_LU_IP4);
    pe.index = tid;

-   /* Set next lu to IPv4 */
-   mvpp2_prs_sram_next_lu_set(&pe, MVPP2_PRS_LU_IP4);
-   mvpp2_prs_sram_shift_set(&pe, 12, MVPP2_PRS_SRAM_OP_SEL_SHIFT_ADD);
+   /* Finished: go to flowid generation */
+   mvpp2_prs_sram_next_lu_set(&pe, MVPP2_PRS_LU_FLOWS);
+   mvpp2_prs_sram_bits_set(&pe, MVPP2_PRS_SRAM_LU_GEN_BIT, 1);
+
    /* Set L4 offset */
    mvpp2_prs_sram_offset_set(&pe, MVPP2_PRS_SRAM_UDF_TYPE_L4,
                              sizeof(struct iphdr) - 4,
                              MVPP2_PRS_SRAM_OP_SEL_UDF_ADD);
-   mvpp2_prs_sram_ai_update(&pe, MVPP2_PRS_IPV4_DIP_AI_BIT,
-                            MVPP2_PRS_IPV4_DIP_AI_BIT);
+   mvpp2_prs_sram_ai_update(&pe, 0, MVPP2_PRS_IPV4_DIP_AI_BIT);
    mvpp2_prs_sram_ri_update(&pe, ri, ri_mask | MVPP2_PRS_RI_IP_FRAG_MASK);

    mvpp2_prs_tcam_data_byte_set(&pe, 2, 0x00,
@@ -899,7 +899,8 @@ static int mvpp2_prs_ip4_proto(struct mvpp2 *priv, unsigned short proto,
                                 MVPP2_PRS_TCAM_PROTO_MASK);

    mvpp2_prs_tcam_data_byte_set(&pe, 5, proto, MVPP2_PRS_TCAM_PROTO_MASK);
-   mvpp2_prs_tcam_ai_update(&pe, 0, MVPP2_PRS_IPV4_DIP_AI_BIT);
+   mvpp2_prs_tcam_ai_update(&pe, MVPP2_PRS_IPV4_DIP_AI_BIT,
+                            MVPP2_PRS_IPV4_DIP_AI_BIT);
    /* Unmask all ports */
    mvpp2_prs_tcam_port_map_set(&pe, MVPP2_PRS_PORT_MASK);

@@ -967,12 +968,17 @@ static int mvpp2_prs_ip4_cast(struct mvpp2 *priv, unsigned short l3_cast)
        return -EINVAL;
    }

-   /* Finished: go to flowid generation */
-   mvpp2_prs_sram_next_lu_set(&pe, MVPP2_PRS_LU_FLOWS);
-   mvpp2_prs_sram_bits_set(&pe, MVPP2_PRS_SRAM_LU_GEN_BIT, 1);
+   /* Go again to ipv4 */
+   mvpp2_prs_sram_next_lu_set(&pe, MVPP2_PRS_LU_IP4);

-   mvpp2_prs_tcam_ai_update(&pe, MVPP2_PRS_IPV4_DIP_AI_BIT,
+   mvpp2_prs_sram_ai_update(&pe, MVPP2_PRS_IPV4_DIP_AI_BIT,
                             MVPP2_PRS_IPV4_DIP_AI_BIT);
+
+   /* Shift back to IPv4 proto */
+   mvpp2_prs_sram_shift_set(&pe, -12, MVPP2_PRS_SRAM_OP_SEL_SHIFT_ADD);
+
+   mvpp2_prs_tcam_ai_update(&pe, 0, MVPP2_PRS_IPV4_DIP_AI_BIT);
+
    /* Unmask all ports */
    mvpp2_prs_tcam_port_map_set(&pe, MVPP2_PRS_PORT_MASK);

@@ -1392,8 +1398,9 @@ static int mvpp2_prs_etype_init(struct mvpp2 *priv)
    mvpp2_prs_sram_next_lu_set(&pe, MVPP2_PRS_LU_IP4);
    mvpp2_prs_sram_ri_update(&pe, MVPP2_PRS_RI_L3_IP4,
                             MVPP2_PRS_RI_L3_PROTO_MASK);
-   /* Skip eth_type + 4 bytes of IP header */
-   mvpp2_prs_sram_shift_set(&pe, MVPP2_ETH_TYPE_LEN + 4,
+   /* goto ipv4 dest-address (skip eth_type + IP-header-size - 4) */
+   mvpp2_prs_sram_shift_set(&pe, MVPP2_ETH_TYPE_LEN +
+                            sizeof(struct iphdr) - 4,
                             MVPP2_PRS_SRAM_OP_SEL_SHIFT_ADD);
    /* Set L3 offset */
    mvpp2_prs_sram_offset_set(&pe, MVPP2_PRS_SRAM_UDF_TYPE_L3,
@@ -1597,8 +1604,9 @@ static int mvpp2_prs_pppoe_init(struct mvpp2 *priv)
    mvpp2_prs_sram_next_lu_set(&pe, MVPP2_PRS_LU_IP4);
    mvpp2_prs_sram_ri_update(&pe, MVPP2_PRS_RI_L3_IP4_OPT,
                             MVPP2_PRS_RI_L3_PROTO_MASK);
-   /* Skip eth_type + 4 bytes of IP header */
-   mvpp2_prs_sram_shift_set(&pe, MVPP2_ETH_TYPE_LEN + 4,
+   /* goto ipv4 dest-address (skip eth_type + IP-header-size - 4) */
+   mvpp2_prs_sram_shift_set(&pe, MVPP2_ETH_TYPE_LEN +
+                            sizeof(struct iphdr) - 4,
                             MVPP2_PRS_SRAM_OP_SEL_SHIFT_ADD);
    /* Set L3 offset */
    mvpp2_prs_sram_offset_set(&pe, MVPP2_PRS_SRAM_UDF_TYPE_L3,
@@ -1727,19 +1735,20 @@ static int mvpp2_prs_ip4_init(struct mvpp2 *priv)
    mvpp2_prs_tcam_lu_set(&pe, MVPP2_PRS_LU_IP4);
    pe.index = MVPP2_PE_IP4_PROTO_UN;

-   /* Set next lu to IPv4 */
-   mvpp2_prs_sram_next_lu_set(&pe, MVPP2_PRS_LU_IP4);
-   mvpp2_prs_sram_shift_set(&pe, 12, MVPP2_PRS_SRAM_OP_SEL_SHIFT_ADD);