Re: [PATCH] make net_gso_ok return false when gso_type is zero(invalid)
> Note that TCP stack now works with GSO being always on.
> 0a6b2a1dc2a2 ("tcp: switch to GSO being always on")

I've tested on the latest net-next branch, 17dec0a949153d9ac00760ba2f5b78cb583e995f. The problem still exists. My patch won't work, and reverting commit 0a6b2a1dc2a2 won't help either.
Re: [PATCH] make net_gso_ok return false when gso_type is zero(invalid)
2018-04-10 18:32 GMT+02:00 Marcelo Ricardo Leitner :
> On Sun, Apr 08, 2018 at 08:41:21PM +0200, Wenhua Shi wrote:
>> 2018-04-08 18:51 GMT+02:00 David Miller :
>> >
>> > From: Wenhua Shi
>> > Date: Fri, 6 Apr 2018 03:43:39 +0200
>> >
>> > > Signed-off-by: Wenhua Shi
>> >
>> > This precondition should be made impossible instead of having to do
>> > an extra check everywhere that this helper is invoked, many of which
>> > are in fast paths.
>>
>> I believe the precondition you mention is quite right. In my situation, I
>> have to disable GSO for some packets, and I notice that it leads to much
>> worse performance (slower than 1 Mbps, where it was almost 800 Mbps).
>>
>> Here's the hook I use on Debian 9.4, kernel version 4.9:
>
> There is quite a distance between 4.9 and net/net-next. Did you test
> on a more recent kernel too?
>
> Note that TCP stack now works with GSO being always on.
> 0a6b2a1dc2a2 ("tcp: switch to GSO being always on")

I've tried testing on the Fedora rawhide channel. The kernel version is 4.17.0. Detailed information is attached.
Without the hook:

[root@fedora-s-1vcpu-1gb-sfo1-01 testing]# iperf -c myanothernormalmachine -d
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
Client connecting to myanothernormalmachine, TCP port 5001
TCP window size: 85.0 KByte (default)
[  3] local 107.170.240.XXX port 44692 connected with 104.131.148.XXX port 5001
[  5] local 107.170.240.XXX port 5001 connected with 104.131.148.XXX port 53978
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.04 GBytes   892 Mbits/sec
[  5]  0.0-10.0 sec   757 MBytes   638 Mbits/sec

With the hook:

[root@fedora-s-1vcpu-1gb-sfo1-01 testing]# iperf -c myanothernormalmachine -d
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
Client connecting to myanothernormalmachine, TCP port 5001
TCP window size: 85.0 KByte (default)
[  3] local 107.170.240.XXX port 44694 connected with 104.131.148.XXX port 5001
[  5] local 107.170.240.XXX port 5001 connected with 104.131.148.XXX port 53980
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec  1.04 GBytes   894 Mbits/sec
[  3]  0.0-13.5 sec   170 KBytes   103 Kbits/sec

Kernel:

[root@fedora-s-1vcpu-1gb-sfo1-01 testing]# uname -a
Linux fedora-s-1vcpu-1gb-sfo1-01.localdomain 4.17.0-0.rc0.git5.2.fc29.x86_64 #1 SMP Mon Apr 9 17:16:30 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

Hook source code:

[root@fedora-s-1vcpu-1gb-sfo1-01 testing]# cat testing.c
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/skbuff.h>
#include <linux/netfilter.h>
#include <linux/netfilter_ipv4.h>
#include <net/net_namespace.h>

unsigned int hook_outgoing(void *priv, struct sk_buff *skb,
			   const struct nf_hook_state *state)
{
	printk(KERN_INFO "Hook working...\n");

	/* for some reason I have to disable GSO */
	skb_gso_reset(skb);

	/* The following won't work any more. */
	// skb->sk->sk_gso_type = ~0;

	return NF_ACCEPT;
}

static struct nf_hook_ops hook = {
	.hook     = hook_outgoing,
	.pf       = PF_INET,
	.hooknum  = NF_INET_POST_ROUTING,
	.priority = NF_IP_PRI_LAST,
};

static int __init init_testing(void)
{
	nf_register_net_hook(&init_net, &hook);
	return 0;
}

static void __exit exit_testing(void)
{
	nf_unregister_net_hook(&init_net, &hook);
}

MODULE_LICENSE("GPL");
module_init(init_testing);
module_exit(exit_testing);

It turns out the problem still exists, and my previous bypassing trick no longer works. I'm now testing whether the patch works on the latest net-next branch.
Re: [PATCH] make net_gso_ok return false when gso_type is zero(invalid)
2018-04-08 18:51 GMT+02:00 David Miller :
>
> From: Wenhua Shi
> Date: Fri, 6 Apr 2018 03:43:39 +0200
>
> > Signed-off-by: Wenhua Shi
>
> This precondition should be made impossible instead of having to do
> an extra check everywhere that this helper is invoked, many of which
> are in fast paths.

I believe the precondition you mention is quite right. In my situation, I have to disable GSO for some packets, and I notice that it leads to much worse performance (slower than 1 Mbps, where it was almost 800 Mbps).

Here's the hook I use on Debian 9.4, kernel version 4.9:

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/skbuff.h>
#include <linux/netfilter.h>
#include <linux/netfilter_ipv4.h>

unsigned int hook_outgoing(void *priv, struct sk_buff *skb,
			   const struct nf_hook_state *state)
{
	/* for some reason I have to disable GSO */
	skb_gso_reset(skb);

	/* After I force sk_can_gso to return false here,
	 * the performance comes back to normal. */
	// skb->sk->sk_gso_type = ~0;

	return NF_ACCEPT;
}

static struct nf_hook_ops hook = {
	.hook     = hook_outgoing,
	.pf       = PF_INET,
	.hooknum  = NF_INET_POST_ROUTING,
	.priority = NF_IP_PRI_LAST,
};

static int __init init_testing(void)
{
	nf_register_hook(&hook);
	return 0;
}

static void __exit exit_testing(void)
{
	nf_unregister_hook(&hook);
}

module_init(init_testing);
module_exit(exit_testing);

Here are the performance measurements.
Without the previous hook:

root@debian-s-1vcpu-1gb-sfo1-01:~/test# iperf -c myanothernormaldebian -d
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
Client connecting to myanothernormaldebian, TCP port 5001
TCP window size: 255 KByte (default)
[  3] local 192.241.204.XXX port 60528 connected with 104.131.148.XXX port 5001
[  5] local 192.241.204.XXX port 5001 connected with 104.131.148.XXX port 58576
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   922 MBytes   773 Mbits/sec
[  5]  0.0-10.1 sec  1.00 GBytes   849 Mbits/sec

And with the previous hook:

root@debian-s-1vcpu-1gb-sfo1-01:~/test# iperf -c myanothernormaldebian -d
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
Client connecting to myanothernormaldebian, TCP port 5001
TCP window size: 85.0 KByte (default)
[  3] local 192.241.204.XXX port 60530 connected with 104.131.148.XXX port 5001
[  5] local 192.241.204.XXX port 5001 connected with 104.131.148.XXX port 58578
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.2 sec  1.02 GBytes   864 Mbits/sec
[  3]  0.0-13.5 sec   170 KBytes   103 Kbits/sec

Or is it just that I'm disabling GSO in the wrong way?
[PATCH] make net_gso_ok return false when gso_type is zero(invalid)
Signed-off-by: Wenhua Shi
---
 include/linux/netdevice.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index cf44503e..1f26cbcf 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -4187,7 +4187,7 @@ static inline bool net_gso_ok(netdev_features_t features, int gso_type)
 	BUILD_BUG_ON(SKB_GSO_ESP != (NETIF_F_GSO_ESP >> NETIF_F_GSO_SHIFT));
 	BUILD_BUG_ON(SKB_GSO_UDP != (NETIF_F_GSO_UDP >> NETIF_F_GSO_SHIFT));
 
-	return (features & feature) == feature;
+	return feature && (features & feature) == feature;
 }
 
 static inline bool skb_gso_ok(struct sk_buff *skb, netdev_features_t features)
-- 
2.11.0
[PATCH] fix typo in skbuff.c
---
 net/core/skbuff.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 16982de6..e62476be 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -1896,7 +1896,7 @@ void *__pskb_pull_tail(struct sk_buff *skb, int delta)
 	}
 
 	/* If we need update frag list, we are in troubles.
-	 * Certainly, it possible to add an offset to skb data,
+	 * Certainly, it is possible to add an offset to skb data,
 	 * but taking into account that pulling is expected to
 	 * be very rare operation, it is worth to fight against
 	 * further bloating skb head and crucify ourselves here instead.
-- 
2.11.0