Re: [PATCH net-next 0/1 v2] skbuff: Extend gso_type to unsigned int.

2017-04-09 Thread Alexey Dobriyan
>  struct skb_shared_info {
> + unsigned short  _unused;
>   unsigned char   nr_frags;

This makes _all_ fields be accessed with an offset, but if you move the
padding down, at least ->nr_frags will enjoy clean and simple [R64]
addressing.

On an allyesconfig-ish kernel:

before: +542 = 720-178
 after: -158 =  53-211 (negative, because 16-bit field became 32-bit)

You may want to add a configurable redzone there instead of padding,
if that is what you want. 2 bytes is nothing.
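
Something like this, purely for illustration (untested sketch built from
the v2 field list; the two spare bytes move down after gso_segs, the
layout stays hole-free at 320 bytes on x86-64, and ->nr_frags keeps
offset 0):

struct skb_shared_info {
	unsigned char	nr_frags;	/* stays at offset 0 */
	__u8		tx_flags;
	unsigned short	gso_size;
	/* Warning: this field is not always filled in (UFO)! */
	unsigned short	gso_segs;
	unsigned short	_unused;	/* spare two bytes moved down here */
	struct sk_buff	*frag_list;
	struct skb_shared_hwtstamps hwtstamps;
	unsigned int	gso_type;
	u32		tskey;
	__be32		ip6_frag_id;
	atomic_t	dataref;
	void		*destructor_arg;
	skb_frag_t	frags[MAX_SKB_FRAGS];
};

The trade-off is that the spare bytes no longer sit at the very front,
so they stop acting as a cheap redzone against overruns coming from the
skb head.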


Re: [PATCH net-next 0/1 v2] skbuff: Extend gso_type to unsigned int.

2017-04-08 Thread David Miller
From: Eric Dumazet 
Date: Sat, 08 Apr 2017 11:55:32 -0700

> On Sat, 2017-04-08 at 20:36 +0200, Steffen Klassert wrote:
>> All available gso_type flags are currently in use, so
>> extend gso_type from 'unsigned short' to 'unsigned int'
>> to be able to add further flags.
>> 
>> We reorder struct skb_shared_info to use
>> two bytes of the four-byte hole before dataref.
>> All fields before dataref are cleared, i.e.
>> four bytes more are cleared than before the change.
>>
>> The remaining two-byte hole is moved to the
>> beginning of the structure; this protects us
>> from immediate overwrites on out-of-bounds writes
>> to the sk_buff head.
> 
>> Signed-off-by: Steffen Klassert 
...
> This looks much better, thanks Steffen.
> 
> Acked-by: Eric Dumazet 

Applied, thanks everyone.


Re: [PATCH net-next 0/1 v2] skbuff: Extend gso_type to unsigned int.

2017-04-08 Thread Eric Dumazet
On Sat, 2017-04-08 at 20:36 +0200, Steffen Klassert wrote:
> All available gso_type flags are currently in use, so
> extend gso_type from 'unsigned short' to 'unsigned int'
> to be able to add further flags.
> 
> We reorder struct skb_shared_info to use
> two bytes of the four-byte hole before dataref.
> All fields before dataref are cleared, i.e.
> four bytes more are cleared than before the change.
>
> The remaining two-byte hole is moved to the
> beginning of the structure; this protects us
> from immediate overwrites on out-of-bounds writes
> to the sk_buff head.

> Signed-off-by: Steffen Klassert 
> ---
>  include/linux/skbuff.h | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
> index c776abd..741d75c 100644
> --- a/include/linux/skbuff.h
> +++ b/include/linux/skbuff.h
> @@ -413,14 +413,15 @@ struct ubuf_info {
>   * the end of the header data, ie. at skb->end.
>   */
>  struct skb_shared_info {
> + unsigned short  _unused;
>   unsigned char   nr_frags;
>   __u8            tx_flags;
>   unsigned short  gso_size;
>   /* Warning: this field is not always filled in (UFO)! */
>   unsigned short  gso_segs;
> - unsigned short  gso_type;
>   struct sk_buff  *frag_list;
>   struct skb_shared_hwtstamps hwtstamps;
> + unsigned int    gso_type;
>   u32 tskey;
>   __be32  ip6_frag_id;
>  

This looks much better, thanks Steffen.

Acked-by: Eric Dumazet 


[PATCH net-next 0/1 v2] skbuff: Extend gso_type to unsigned int.

2017-04-08 Thread Steffen Klassert
All available gso_type flags are currently in use, so
extend gso_type from 'unsigned short' to 'unsigned int'
to be able to add further flags.

We reorder struct skb_shared_info to use
two bytes of the four-byte hole before dataref.
All fields before dataref are cleared, i.e.
four bytes more are cleared than before the change.

The remaining two-byte hole is moved to the
beginning of the structure; this protects us
from immediate overwrites on out-of-bounds writes
to the sk_buff head.

Structure layout on x86-64 before the change:

struct skb_shared_info {
        unsigned char               nr_frags;          /*     0     1 */
        __u8                        tx_flags;          /*     1     1 */
        short unsigned int          gso_size;          /*     2     2 */
        short unsigned int          gso_segs;          /*     4     2 */
        short unsigned int          gso_type;          /*     6     2 */
        struct sk_buff *            frag_list;         /*     8     8 */
        struct skb_shared_hwtstamps hwtstamps;         /*    16     8 */
        u32                         tskey;             /*    24     4 */
        __be32                      ip6_frag_id;       /*    28     4 */
        atomic_t                    dataref;           /*    32     4 */

        /* XXX 4 bytes hole, try to pack */

        void *                      destructor_arg;    /*    40     8 */
        skb_frag_t                  frags[17];         /*    48   272 */
        /* --- cacheline 5 boundary (320 bytes) --- */

        /* size: 320, cachelines: 5, members: 12 */
        /* sum members: 316, holes: 1, sum holes: 4 */
};

Structure layout on x86-64 after the change:

struct skb_shared_info {
        short unsigned int          _unused;           /*     0     2 */
        unsigned char               nr_frags;          /*     2     1 */
        __u8                        tx_flags;          /*     3     1 */
        short unsigned int          gso_size;          /*     4     2 */
        short unsigned int          gso_segs;          /*     6     2 */
        struct sk_buff *            frag_list;         /*     8     8 */
        struct skb_shared_hwtstamps hwtstamps;         /*    16     8 */
        unsigned int                gso_type;          /*    24     4 */
        u32                         tskey;             /*    28     4 */
        __be32                      ip6_frag_id;       /*    32     4 */
        atomic_t                    dataref;           /*    36     4 */
        void *                      destructor_arg;    /*    40     8 */
        skb_frag_t                  frags[17];         /*    48   272 */
        /* --- cacheline 5 boundary (320 bytes) --- */

        /* size: 320, cachelines: 5, members: 13 */
};
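
As an aside (not part of the patch): if we ever want to pin these
offsets down at build time, checks along the following lines would
match the x86-64 layout above. This is only a sketch; the numbers are
arch and config dependent, and the BUILD_BUG_ON()s would have to live
in some init function:

	/* Sketch: compile-time checks mirroring the pahole output
	 * above (x86-64). Not part of this patch.
	 */
	BUILD_BUG_ON(offsetof(struct skb_shared_info, nr_frags) != 2);
	BUILD_BUG_ON(offsetof(struct skb_shared_info, gso_type) != 24);
	BUILD_BUG_ON(offsetof(struct skb_shared_info, dataref) != 36);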

Signed-off-by: Steffen Klassert 
---
 include/linux/skbuff.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index c776abd..741d75c 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -413,14 +413,15 @@ struct ubuf_info {
  * the end of the header data, ie. at skb->end.
  */
 struct skb_shared_info {
+   unsigned short  _unused;
unsigned char   nr_frags;
__u8            tx_flags;
unsigned short  gso_size;
/* Warning: this field is not always filled in (UFO)! */
unsigned short  gso_segs;
-   unsigned short  gso_type;
struct sk_buff  *frag_list;
struct skb_shared_hwtstamps hwtstamps;
+   unsigned int    gso_type;
u32 tskey;
__be32  ip6_frag_id;
 
-- 
2.7.4



[PATCH net-next 0/1 v2] skbuff: Extend gso_type to unsigned int

2017-04-08 Thread Steffen Klassert
We need a GSO type for IPsec ESP to be able to integrate the IPsec
hardware offloading infrastructure. Unfortunately, all gso_type flags
are currently in use. This change extends gso_type from 'unsigned short'
to 'unsigned int'.

Changes from v1:

- Place the remaining two-byte hole at the front, as suggested by
  Eric Dumazet.

- Skipping the memset of the two-byte hole is unnecessary, since it
  does not matter whether we memset two or four bytes more; suggested
  by Alexander Duyck (see the clearing sketch below).
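
For context, this is roughly the clearing pattern on the allocation
path that makes the two-vs-four byte question moot: everything before
dataref is zeroed with a single memset, so the hole is cleared together
with the real fields. Simplified sketch of the __alloc_skb()-style
initialization in net/core/skbuff.c (surrounding code omitted):

	struct skb_shared_info *shinfo = skb_shinfo(skb);

	/* Zero everything up to (but not including) dataref in one go;
	 * with the v2 layout this now also covers the leading two-byte
	 * hole and the widened gso_type.
	 */
	memset(shinfo, 0, offsetof(struct skb_shared_info, dataref));
	atomic_set(&shinfo->dataref, 1);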