Re: [PATCH] mm: page_alloc: High-order per-cpu page allocator v7
On Thu, Dec 08, 2016 at 06:19:51PM +0100, Jesper Dangaard Brouer wrote:
> > > See patch below signature.
> > >
> > > Besides I think you misunderstood me, you can adjust:
> > >  sysctl net.core.rmem_max
> > >  sysctl net.core.wmem_max
> > >
> > > And you should if you plan to use/set 851968 as socket size for UDP
> > > remote tests, else you will be limited to the "max" values (212992,
> > > well actually 425984, 2x the default value, for reasons I cannot
> > > remember).
> >
> > The intent is to use the larger values to avoid packet loss on
> > UDP_STREAM.
>
> We do seem to misunderstand each other.
> I was just pointing out two things:
>
> 1. Notice the difference between the "max" and "default" proc settings.
>    Only adjust the "max" setting.
>
> 2. There was a simple bash shell-script error in your commit.
>    The patch below fixes it.

Understood now.

> [PATCH] mmtests: actually use variable SOCKETSIZE_OPT
>
> From: Jesper Dangaard Brouer

Applied, thanks!

-- 
Mel Gorman
SUSE Labs
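[Editorial note: for readers reproducing this, a minimal sketch of the setup the thread converged on. The buffer value 851968 is the one discussed above; the host address and message size are placeholders, not from the thread.]

    # Raise only the "max" sysctls so an application-requested buffer of
    # 851968 bytes is not clamped; leave rmem_default/wmem_default alone.
    sysctl -w net.core.rmem_max=851968
    sysctl -w net.core.wmem_max=851968

    # netperf then requests the large buffers itself: -s sets the local
    # and -S the remote socket buffer size, after the test-specific "--"
    # separator; -m is the send message size.
    netperf -t UDP_STREAM -H 192.168.1.2 -- -s 851968 -S 851968 -m 1024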
Re: [PATCH] mm: page_alloc: High-order per-cpu page allocator v7
On Thu, 8 Dec 2016 15:11:01 +0000 Mel Gorman wrote:

> On Thu, Dec 08, 2016 at 03:48:13PM +0100, Jesper Dangaard Brouer wrote:
> > On Thu, 8 Dec 2016 11:06:56 +0000
> > Mel Gorman wrote:
> >
> > > On Thu, Dec 08, 2016 at 11:43:08AM +0100, Jesper Dangaard Brouer wrote:
> > > > > That's expected. In the initial sniff-test, I saw negligible packet
> > > > > loss. I'm waiting to see what the full set of network tests look
> > > > > like before doing any further adjustments.
> > > >
> > > > For netperf I will not recommend adjusting the global default
> > > > /proc/sys/net/core/rmem_default as netperf has means of adjusting this
> > > > value from the application (which were the options you set up too low
> > > > and just removed). I think you should keep this as the default for now
> > > > (unless Eric says something else), as this should cover most users.
> > >
> > > Ok, the current state is that buffer sizes are only set for netperf
> > > UDP_STREAM and only when running over a real network. The values selected
> > > were specific to the network I had available so mileage may vary.
> > > localhost is left at the defaults.
> >
> > Looks like you made a mistake when re-implementing the use of buffer
> > sizes for netperf.
>
> We appear to have a disconnect. This was reintroduced in response to your
> comment "For netperf I will not recommend adjusting the global default
> /proc/sys/net/core/rmem_default as netperf has means of adjusting this
> value from the application".
>
> My understanding was that netperf's means were the -s and -S switches for
> send and recv buffers, so I reintroduced them and avoided altering
> [r|w]mem_default.
>
> Leaving the defaults resulted in some UDP packet loss on a 10GbE network,
> hence some upward adjustment.
>
> From my perspective, either adjusting [r|w]mem_default or specifying -s
> -S works for the UDP_STREAM issue, but using the switches means only this
> test is affected; other loads like sockperf and netpipe will need to be
> evaluated separately, which I don't mind doing.
>
> > See patch below signature.
> >
> > Besides I think you misunderstood me, you can adjust:
> >  sysctl net.core.rmem_max
> >  sysctl net.core.wmem_max
> >
> > And you should if you plan to use/set 851968 as socket size for UDP
> > remote tests, else you will be limited to the "max" values (212992,
> > well actually 425984, 2x the default value, for reasons I cannot
> > remember).
>
> The intent is to use the larger values to avoid packet loss on
> UDP_STREAM.

We do seem to misunderstand each other. I was just pointing out two things:

1. Notice the difference between the "max" and "default" proc settings.
   Only adjust the "max" setting.

2. There was a simple bash shell-script error in your commit.
   The patch below fixes it.

[PATCH] mmtests: actually use variable SOCKETSIZE_OPT

From: Jesper Dangaard Brouer

commit 7f16226577b2 ("netperf: Set remote and local socket max buffer
sizes") removed netperf's setting of the socket buffer sizes and instead
used global /proc/sys settings.

commit de9f8cdb7146 ("netperf: Only adjust socket sizes for UDP_STREAM")
re-added netperf explicitly setting the socket buffer sizes for
remote-host testing (saved in SOCKETSIZE_OPT). The only problem is that
this variable is no longer used after commit 7f16226577b2.

Simply use $SOCKETSIZE_OPT when invoking the netperf command.
Signed-off-by: Jesper Dangaard Brouer
---
 shellpack_src/src/netperf/netperf-bench |    2 +-
 shellpacks/shellpack-bench-netperf      |    2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/shellpack_src/src/netperf/netperf-bench b/shellpack_src/src/netperf/netperf-bench
index 8e7d02864c4a..b2820610936e 100755
--- a/shellpack_src/src/netperf/netperf-bench
+++ b/shellpack_src/src/netperf/netperf-bench
@@ -93,7 +93,7 @@ mmtests_server_ctl start --serverside-name $PROTOCOL-$SIZE
 		-t $PROTOCOL \
 		-i 3,3 -I 95,5 \
 		-H $SERVER_HOST \
-		-- $MSGSIZE_OPT $EXTRA \
+		-- $SOCKETSIZE_OPT $MSGSIZE_OPT $EXTRA \
 		2>&1 | tee $LOGDIR_RESULTS/$PROTOCOL-${SIZE}.$ITERATION \
 			|| die Failed to run netperf
 	monitor_post_hook $LOGDIR_RESULTS $SIZE
diff --git a/shellpacks/shellpack-bench-netperf b/shellpacks/shellpack-bench-netperf
index 2ce26ba39f1b..7356082d5a78 100755
--- a/shellpacks/shellpack-bench-netperf
+++ b/shellpacks/shellpack-bench-netperf
@@ -190,7 +190,7 @@ for ITERATION in `seq 1 $ITERATIONS`; do
 		-t $PROTOCOL \
 		-i 3,3 -I 95,5 \
 		-H $SERVER_HOST \
-		-- $MSGSIZE_OPT $EXTRA \
+		-- $SOCKETSIZE_OPT $MSGSIZE_OPT $EXTRA \
 		2>&1 | tee $LOGDIR_RESULTS/$PROTOCOL-${SIZE}.$ITERATION \
 			|| die Failed to run netperf
 	monitor_post_hook $LOGDIR_RESULTS $SIZE

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
Re: [PATCH] mm: page_alloc: High-order per-cpu page allocator v7
On Thu, 2016-12-08 at 09:18 +0000, Mel Gorman wrote:

> Yes, I set it for higher speed networks as a starting point to remind me
> to examine rmem_default or socket configurations if any significant packet
> loss is observed.

Note that your page allocator changes might show more impact with netperf
and af_unix (instead of UDP).
Re: [PATCH] mm: page_alloc: High-order per-cpu page allocator v7
On Thu, Dec 08, 2016 at 03:48:13PM +0100, Jesper Dangaard Brouer wrote:
> On Thu, 8 Dec 2016 11:06:56 +0000
> Mel Gorman wrote:
>
> > On Thu, Dec 08, 2016 at 11:43:08AM +0100, Jesper Dangaard Brouer wrote:
> > > > That's expected. In the initial sniff-test, I saw negligible packet
> > > > loss. I'm waiting to see what the full set of network tests look
> > > > like before doing any further adjustments.
> > >
> > > For netperf I will not recommend adjusting the global default
> > > /proc/sys/net/core/rmem_default as netperf has means of adjusting this
> > > value from the application (which were the options you set up too low
> > > and just removed). I think you should keep this as the default for now
> > > (unless Eric says something else), as this should cover most users.
> >
> > Ok, the current state is that buffer sizes are only set for netperf
> > UDP_STREAM and only when running over a real network. The values selected
> > were specific to the network I had available so mileage may vary.
> > localhost is left at the defaults.
>
> Looks like you made a mistake when re-implementing the use of buffer
> sizes for netperf.

We appear to have a disconnect. This was reintroduced in response to your
comment "For netperf I will not recommend adjusting the global default
/proc/sys/net/core/rmem_default as netperf has means of adjusting this
value from the application".

My understanding was that netperf's means were the -s and -S switches for
send and recv buffers, so I reintroduced them and avoided altering
[r|w]mem_default.

Leaving the defaults resulted in some UDP packet loss on a 10GbE network,
hence some upward adjustment.

From my perspective, either adjusting [r|w]mem_default or specifying -s
-S works for the UDP_STREAM issue, but using the switches means only this
test is affected; other loads like sockperf and netpipe will need to be
evaluated separately, which I don't mind doing.

> See patch below signature.
>
> Besides I think you misunderstood me, you can adjust:
>  sysctl net.core.rmem_max
>  sysctl net.core.wmem_max
>
> And you should if you plan to use/set 851968 as socket size for UDP
> remote tests, else you will be limited to the "max" values (212992,
> well actually 425984, 2x the default value, for reasons I cannot
> remember).

The intent is to use the larger values to avoid packet loss on UDP_STREAM.

-- 
Mel Gorman
SUSE Labs
Re: [PATCH] mm: page_alloc: High-order per-cpu page allocator v7
On Thu, 8 Dec 2016 11:06:56 +0000 Mel Gorman wrote:

> On Thu, Dec 08, 2016 at 11:43:08AM +0100, Jesper Dangaard Brouer wrote:
> > > That's expected. In the initial sniff-test, I saw negligible packet loss.
> > > I'm waiting to see what the full set of network tests look like before
> > > doing any further adjustments.
> >
> > For netperf I will not recommend adjusting the global default
> > /proc/sys/net/core/rmem_default as netperf has means of adjusting this
> > value from the application (which were the options you set up too low
> > and just removed). I think you should keep this as the default for now
> > (unless Eric says something else), as this should cover most users.
>
> Ok, the current state is that buffer sizes are only set for netperf
> UDP_STREAM and only when running over a real network. The values selected
> were specific to the network I had available so mileage may vary.
> localhost is left at the defaults.

Looks like you made a mistake when re-implementing the use of buffer
sizes for netperf. See patch below signature.

Besides I think you misunderstood me, you can adjust:
 sysctl net.core.rmem_max
 sysctl net.core.wmem_max

And you should if you plan to use/set 851968 as socket size for UDP
remote tests, else you will be limited to the "max" values (212992,
well actually 425984, 2x the default value, for reasons I cannot
remember).

https://github.com/gormanm/mmtests/commit/de9f8cdb7146021

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer

[PATCH] mmtests: actually use variable SOCKETSIZE_OPT

From: Jesper Dangaard Brouer

commit 7f16226577b2 ("netperf: Set remote and local socket max buffer
sizes") removed netperf's setting of the socket buffer sizes and instead
used global /proc/sys settings.

commit de9f8cdb7146 ("netperf: Only adjust socket sizes for UDP_STREAM")
re-added netperf explicitly setting the socket buffer sizes for
remote-host testing (saved in SOCKETSIZE_OPT). The only problem is that
this variable is no longer used after commit 7f16226577b2.

Simply use $SOCKETSIZE_OPT when invoking the netperf command.

Signed-off-by: Jesper Dangaard Brouer
---
 shellpack_src/src/netperf/netperf-bench |    2 +-
 shellpacks/shellpack-bench-netperf      |    2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/shellpack_src/src/netperf/netperf-bench b/shellpack_src/src/netperf/netperf-bench
index 8e7d02864c4a..b2820610936e 100755
--- a/shellpack_src/src/netperf/netperf-bench
+++ b/shellpack_src/src/netperf/netperf-bench
@@ -93,7 +93,7 @@ mmtests_server_ctl start --serverside-name $PROTOCOL-$SIZE
 		-t $PROTOCOL \
 		-i 3,3 -I 95,5 \
 		-H $SERVER_HOST \
-		-- $MSGSIZE_OPT $EXTRA \
+		-- $SOCKETSIZE_OPT $MSGSIZE_OPT $EXTRA \
 		2>&1 | tee $LOGDIR_RESULTS/$PROTOCOL-${SIZE}.$ITERATION \
 			|| die Failed to run netperf
 	monitor_post_hook $LOGDIR_RESULTS $SIZE
diff --git a/shellpacks/shellpack-bench-netperf b/shellpacks/shellpack-bench-netperf
index 2ce26ba39f1b..7356082d5a78 100755
--- a/shellpacks/shellpack-bench-netperf
+++ b/shellpacks/shellpack-bench-netperf
@@ -190,7 +190,7 @@ for ITERATION in `seq 1 $ITERATIONS`; do
 		-t $PROTOCOL \
 		-i 3,3 -I 95,5 \
 		-H $SERVER_HOST \
-		-- $MSGSIZE_OPT $EXTRA \
+		-- $SOCKETSIZE_OPT $MSGSIZE_OPT $EXTRA \
 		2>&1 | tee $LOGDIR_RESULTS/$PROTOCOL-${SIZE}.$ITERATION \
 			|| die Failed to run netperf
 	monitor_post_hook $LOGDIR_RESULTS $SIZE
Re: [PATCH] mm: page_alloc: High-order per-cpu page allocator v7
On Thu, Dec 08, 2016 at 11:43:08AM +0100, Jesper Dangaard Brouer wrote:
> > That's expected. In the initial sniff-test, I saw negligible packet loss.
> > I'm waiting to see what the full set of network tests look like before
> > doing any further adjustments.
>
> For netperf I will not recommend adjusting the global default
> /proc/sys/net/core/rmem_default as netperf has means of adjusting this
> value from the application (which were the options you set up too low
> and just removed). I think you should keep this as the default for now
> (unless Eric says something else), as this should cover most users.
>

Ok, the current state is that buffer sizes are only set for netperf
UDP_STREAM and only when running over a real network. The values selected
were specific to the network I had available so mileage may vary.
localhost is left at the defaults.

-- 
Mel Gorman
SUSE Labs
Re: [PATCH] mm: page_alloc: High-order per-cpu page allocator v7
On Thu, 8 Dec 2016 09:18:06 +0000 Mel Gorman wrote:

> On Thu, Dec 08, 2016 at 09:22:31AM +0100, Jesper Dangaard Brouer wrote:
> > On Wed, 7 Dec 2016 23:25:31 +0000
> > Mel Gorman wrote:
> >
> > > On Wed, Dec 07, 2016 at 09:19:58PM +0000, Mel Gorman wrote:
> > > > At small packet sizes on localhost, I see relatively low page allocator
> > > > activity except during the socket setup and other unrelated activity
> > > > (khugepaged, irqbalance, some btrfs stuff) which is curious as it's
> > > > less clear why the performance was improved in that case. I considered
> > > > the possibility that it was cache hotness of pages but that's not a
> > > > good fit. If it were true then the first test would be slow and the rest
> > > > relatively fast, and I'm not seeing that. The other side-effect is that
> > > > all the high-order pages that are allocated at the start are physically
> > > > close together but that shouldn't have that big an impact. So for now,
> > > > the gain is unexplained even though it happens consistently.
> > >
> > > Further investigation led me to conclude that the netperf automation on
> > > my side had some methodology errors that could account for an artificially
> > > low score in some cases. The netperf automation is years old and would
> > > have been developed against a much older and smaller machine, which may be
> > > why I missed it until I went back looking at exactly what the automation
> > > was doing. Minimally, in a server/client test on a remote machine there was
> > > potentially higher packet loss than is acceptable. This would account for
> > > why some machines "benefitted" while others did not -- there would be
> > > boot-to-boot variations and some machines happened to be "lucky". I believe
> > > I've corrected the errors, discarded all the old data and scheduled a
> > > retest to see what falls out.
> >
> > I guess you are talking about setting the netperf socket queue low
> > (+256 bytes above msg size), which I pointed out in [1].
>
> Primarily, yes.
>
> > From the same commit [2] I can see you explicitly set (local+remote):
> >
> >  sysctl net.core.rmem_max=16777216
> >  sysctl net.core.wmem_max=16777216
>
> Yes, I set it for higher speed networks as a starting point to remind me
> to examine rmem_default or socket configurations if any significant packet
> loss is observed.
>
> > Eric, do you have any advice on this setting?
> >
> > And later [4] you further increase this to 32MiB. Notice that the
> > netperf UDP_STREAM test will still use the default value from:
> > net.core.rmem_default = 212992.
>
> That's expected. In the initial sniff-test, I saw negligible packet loss.
> I'm waiting to see what the full set of network tests look like before
> doing any further adjustments.

For netperf I will not recommend adjusting the global default
/proc/sys/net/core/rmem_default as netperf has means of adjusting this
value from the application (which were the options you set up too low
and just removed). I think you should keep this as the default for now
(unless Eric says something else), as this should cover most users.

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
Re: [PATCH] mm: page_alloc: High-order per-cpu page allocator v7
On Thu, Dec 08, 2016 at 09:22:31AM +0100, Jesper Dangaard Brouer wrote:
> On Wed, 7 Dec 2016 23:25:31 +0000
> Mel Gorman wrote:
>
> > On Wed, Dec 07, 2016 at 09:19:58PM +0000, Mel Gorman wrote:
> > > At small packet sizes on localhost, I see relatively low page allocator
> > > activity except during the socket setup and other unrelated activity
> > > (khugepaged, irqbalance, some btrfs stuff) which is curious as it's
> > > less clear why the performance was improved in that case. I considered
> > > the possibility that it was cache hotness of pages but that's not a
> > > good fit. If it were true then the first test would be slow and the rest
> > > relatively fast, and I'm not seeing that. The other side-effect is that
> > > all the high-order pages that are allocated at the start are physically
> > > close together but that shouldn't have that big an impact. So for now,
> > > the gain is unexplained even though it happens consistently.
> >
> > Further investigation led me to conclude that the netperf automation on
> > my side had some methodology errors that could account for an artificially
> > low score in some cases. The netperf automation is years old and would
> > have been developed against a much older and smaller machine, which may be
> > why I missed it until I went back looking at exactly what the automation
> > was doing. Minimally, in a server/client test on a remote machine there was
> > potentially higher packet loss than is acceptable. This would account for
> > why some machines "benefitted" while others did not -- there would be
> > boot-to-boot variations and some machines happened to be "lucky". I believe
> > I've corrected the errors, discarded all the old data and scheduled a
> > retest to see what falls out.
>
> I guess you are talking about setting the netperf socket queue low
> (+256 bytes above msg size), which I pointed out in [1].

Primarily, yes.

> From the same commit [2] I can see you explicitly set (local+remote):
>
>  sysctl net.core.rmem_max=16777216
>  sysctl net.core.wmem_max=16777216
>

Yes, I set it for higher speed networks as a starting point to remind me
to examine rmem_default or socket configurations if any significant packet
loss is observed.

> Eric, do you have any advice on this setting?
>
> And later [4] you further increase this to 32MiB. Notice that the
> netperf UDP_STREAM test will still use the default value from:
> net.core.rmem_default = 212992.
>

That's expected. In the initial sniff-test, I saw negligible packet loss.
I'm waiting to see what the full set of network tests look like before
doing any further adjustments.

-- 
Mel Gorman
SUSE Labs
Re: [PATCH] mm: page_alloc: High-order per-cpu page allocator v7
On Wed, 7 Dec 2016 23:25:31 +0000 Mel Gorman wrote:

> On Wed, Dec 07, 2016 at 09:19:58PM +0000, Mel Gorman wrote:
> > At small packet sizes on localhost, I see relatively low page allocator
> > activity except during the socket setup and other unrelated activity
> > (khugepaged, irqbalance, some btrfs stuff) which is curious as it's
> > less clear why the performance was improved in that case. I considered
> > the possibility that it was cache hotness of pages but that's not a
> > good fit. If it were true then the first test would be slow and the rest
> > relatively fast, and I'm not seeing that. The other side-effect is that
> > all the high-order pages that are allocated at the start are physically
> > close together but that shouldn't have that big an impact. So for now,
> > the gain is unexplained even though it happens consistently.
>
> Further investigation led me to conclude that the netperf automation on
> my side had some methodology errors that could account for an artificially
> low score in some cases. The netperf automation is years old and would
> have been developed against a much older and smaller machine, which may be
> why I missed it until I went back looking at exactly what the automation
> was doing. Minimally, in a server/client test on a remote machine there was
> potentially higher packet loss than is acceptable. This would account for
> why some machines "benefitted" while others did not -- there would be
> boot-to-boot variations and some machines happened to be "lucky". I believe
> I've corrected the errors, discarded all the old data and scheduled a
> retest to see what falls out.

I guess you are talking about setting the netperf socket queue low
(+256 bytes above msg size), which I pointed out in [1].

I can see from GitHub mmtests commit [2] "netperf: Set remote and local
socket max buffer sizes" that you have removed that, good! :-)

From the same commit [2] I can see you explicitly set (local+remote):

 sysctl net.core.rmem_max=16777216
 sysctl net.core.wmem_max=16777216

Eric, do you have any advice on this setting?

And later [4] you further increase this to 32MiB. Notice that the
netperf UDP_STREAM test will still use the default value from:
net.core.rmem_default = 212992.

(To Eric) Mel's small UDP queues also interacted badly with Eric and
Paolo's UDP improvements, which was fixed in net-next commit [3]
363dc73acacb ("udp: be less conservative with sock rmem accounting").

[1] http://lkml.kernel.org/r/20161201183402.2fbb8...@redhat.com
[2] https://github.com/gormanm/mmtests/commit/7f16226577b
[3] https://git.kernel.org/davem/net-next/c/363dc73acacb
[4] https://github.com/gormanm/mmtests/commit/777d1f5cd08

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
Re: [PATCH] mm: page_alloc: High-order per-cpu page allocator v7
On Wed, Dec 07, 2016 at 09:19:58PM +0000, Mel Gorman wrote:
> At small packet sizes on localhost, I see relatively low page allocator
> activity except during the socket setup and other unrelated activity
> (khugepaged, irqbalance, some btrfs stuff) which is curious as it's
> less clear why the performance was improved in that case. I considered
> the possibility that it was cache hotness of pages but that's not a
> good fit. If it were true then the first test would be slow and the rest
> relatively fast, and I'm not seeing that. The other side-effect is that
> all the high-order pages that are allocated at the start are physically
> close together but that shouldn't have that big an impact. So for now,
> the gain is unexplained even though it happens consistently.
>

Further investigation led me to conclude that the netperf automation on
my side had some methodology errors that could account for an artificially
low score in some cases. The netperf automation is years old and would
have been developed against a much older and smaller machine, which may be
why I missed it until I went back looking at exactly what the automation
was doing. Minimally, in a server/client test on a remote machine there was
potentially higher packet loss than is acceptable. This would account for
why some machines "benefitted" while others did not -- there would be
boot-to-boot variations and some machines happened to be "lucky". I believe
I've corrected the errors, discarded all the old data and scheduled a
retest to see what falls out.

-- 
Mel Gorman
SUSE Labs
Re: [PATCH] mm: page_alloc: High-order per-cpu page allocator v7
On Wed, Dec 07, 2016 at 12:10:24PM -0800, Eric Dumazet wrote:
> On Wed, 2016-12-07 at 19:48 +0000, Mel Gorman wrote:
> >
> > Interesting, because it didn't match what I previously measured, but then
> > again, when I established that netperf on localhost was slab intensive,
> > it was also an older kernel. Can you tell me if SLAB or SLUB was enabled
> > in your test kernel?
> >
> > Either that or the baseline I used has since been changed from what you
> > are testing and we're not hitting the same paths.
>
> lpaa6:~# uname -a
> Linux lpaa6 4.9.0-smp-DEV #429 SMP @1481125332 x86_64 GNU/Linux
>
> lpaa6:~# perf record -g ./netperf -t UDP_STREAM -l 3 -- -m 16384
> MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost () port 0 AF_INET
> Socket  Message  Elapsed      Messages
> Size    Size     Time         Okay Errors   Throughput
> bytes   bytes    secs            #      #   10^6bits/sec
>
> 212992   16384   3.00       654644      0    28601.04
> 212992           3.00       654592           28598.77
>

I'm seeing parts of the disconnect. The load is slab intensive but not
necessarily page allocator intensive depending on a variety of factors.
While the motivation of the patch was initially SLUB, any path that is
high-order page allocator intensive benefits, so:

1. If the workload is slab intensive and SLUB is used then it may benefit
   if SLUB happens to frequently require new pages, particularly if there
   is a pattern of growing/shrinking slabs frequently.

2. If the workload is high-order page allocator intensive but bypassing
   SLUB and SLAB, then it'll benefit anyway.

So you say you don't see much slab activity for some configuration and
it's hitting the page allocator. For the purposes of this patch, that's
fine, albeit useless for a SLAB vs SLUB comparison. Anything else I saw
for the moment is probably not surprising.

At small packet sizes on localhost, I see relatively low page allocator
activity except during the socket setup and other unrelated activity
(khugepaged, irqbalance, some btrfs stuff) which is curious as it's
less clear why the performance was improved in that case. I considered
the possibility that it was cache hotness of pages but that's not a
good fit. If it were true then the first test would be slow and the rest
relatively fast, and I'm not seeing that. The other side-effect is that
all the high-order pages that are allocated at the start are physically
close together but that shouldn't have that big an impact. So for now,
the gain is unexplained even though it happens consistently.

At larger message sizes to localhost, it's page allocator intensive
through paths like this:

 netperf-3887 [032] 393.246420: mm_page_alloc: page=ea0021272200 pfn=8690824 order=3 migratetype=0 gfp_flags=GFP_KERNEL|__GFP_NOWARN|__GFP_REPEAT|__GFP_COMP|__GFP_NOMEMALLOC|__GFP_NOTRACK
 netperf-3887 [032] 393.246421:
 => kmalloc_large_node+0x60/0x8d
 => __kmalloc_node_track_caller+0x245/0x280
 => __kmalloc_reserve.isra.35+0x31/0x90
 => __alloc_skb+0x7e/0x280
 => alloc_skb_with_frags+0x5a/0x1c0
 => sock_alloc_send_pskb+0x19e/0x200
 => sock_alloc_send_skb+0x18/0x20
 => __ip_append_data.isra.46+0x61d/0xa00
 => ip_make_skb+0xc2/0x110
 => udp_sendmsg+0x2c0/0xa40
 => inet_sendmsg+0x7f/0xb0
 => sock_sendmsg+0x38/0x50
 => SYSC_sendto+0x102/0x190
 => SyS_sendto+0xe/0x10
 => do_syscall_64+0x5b/0xd0
 => return_from_SYSCALL_64+0x0/0x6a

It's going through the SLUB paths but finding the allocation is too large
and hitting the page allocator instead. This is using 4.9-rc5 as a
baseline so fixes might be missing.
If using small messages to a remote host, I again see intense page
allocator activity via:

 netperf-4326 [047] 994.978387: mm_page_alloc: page=ea0041413400 pfn=17106128 order=2 migratetype=0 gfp_flags=__GFP_IO|__GFP_FS|__GFP_NOWARN|__GFP_REPEAT|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC|__GFP_NOTRACK
 netperf-4326 [047] 994.978387:
 => alloc_pages_current+0x88/0x120
 => new_slab+0x33f/0x580
 => ___slab_alloc+0x352/0x4d0
 => __slab_alloc.isra.73+0x43/0x5e
 => __kmalloc_node_track_caller+0xba/0x280
 => __kmalloc_reserve.isra.35+0x31/0x90
 => __alloc_skb+0x7e/0x280
 => alloc_skb_with_frags+0x5a/0x1c0
 => sock_alloc_send_pskb+0x19e/0x200
 => sock_alloc_send_skb+0x18/0x20
 => __ip_append_data.isra.46+0x61d/0xa00
 => ip_make_skb+0xc2/0x110
 => udp_sendmsg+0x2c0/0xa40
 => inet_sendmsg+0x7f/0xb0
 => sock_sendmsg+0x38/0x50
 => SYSC_sendto+0x102/0x190
 => SyS_sendto+0xe/0x10
 => do_syscall_64+0x5b/0xd0
 => return_from_SYSCALL_64+0x0/0x6a

This is a slab path, but at different orders. So while the patch was
motivated by SLUB, the fact I'm getting intense page allocator activity
still benefits.

> Maybe one day we will avoid doing order-4 (or even order-5 in extreme
> cases!) allocations for loopback as we did for af_unix :P
>
> I mean, maybe some applications are sending 64KB UDP messages over
> loopback right now...
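[Editorial note: the kmalloc_large_node frame in the first trace above comes from SLUB handing any request above its largest kmem cache straight to the page allocator. A condensed sketch of that dispatch, paraphrased from 4.9-era mm/slub.c rather than quoted verbatim:]

	/* Under SLUB, KMALLOC_MAX_CACHE_SIZE is two pages (8KB on x86-64),
	 * so the 16KB skb->head seen above skips the kmem caches and goes
	 * to the page allocator as a compound order-3 request (the payload
	 * plus struct skb_shared_info exceeds 16KB, so 32KB is needed). */
	void *__kmalloc(size_t size, gfp_t flags)
	{
		if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
			/* effectively alloc_pages(flags | __GFP_COMP, get_order(size)) */
			return kmalloc_large(size, flags);

		/* ... otherwise the normal kmem_cache fast path ... */
		return slab_alloc(kmalloc_slab(size, flags), flags, _RET_IP_);
	}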
Re: [PATCH] mm: page_alloc: High-order per-cpu page allocator v7
On Wed, 2016-12-07 at 19:48 +0000, Mel Gorman wrote:
>
> Interesting, because it didn't match what I previously measured, but then
> again, when I established that netperf on localhost was slab intensive,
> it was also an older kernel. Can you tell me if SLAB or SLUB was enabled
> in your test kernel?
>
> Either that or the baseline I used has since been changed from what you
> are testing and we're not hitting the same paths.

lpaa6:~# uname -a
Linux lpaa6 4.9.0-smp-DEV #429 SMP @1481125332 x86_64 GNU/Linux

lpaa6:~# perf record -g ./netperf -t UDP_STREAM -l 3 -- -m 16384
MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost () port 0 AF_INET
Socket  Message  Elapsed      Messages
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec

212992   16384   3.00       654644      0    28601.04
212992           3.00       654592           28598.77

[ perf record: Woken up 5 times to write data ]
[ perf record: Captured and wrote 1.888 MB perf.data (~82481 samples) ]

perf report --stdio
...
     1.92%  netperf  [kernel.kallsyms]  [k] cache_alloc_refill
            |
            --- cache_alloc_refill
               |
               |--82.22%-- kmem_cache_alloc_node_trace
               |          __kmalloc_node_track_caller
               |          __alloc_skb
               |          alloc_skb_with_frags
               |          sock_alloc_send_pskb
               |          sock_alloc_send_skb
               |          __ip_append_data.isra.50
               |          ip_make_skb
               |          udp_sendmsg
               |          inet_sendmsg
               |          sock_sendmsg
               |          SYSC_sendto
               |          sys_sendto
               |          entry_SYSCALL_64_fastpath
               |          __sendto_nocancel
               |          |
               |           --100.00%-- 0x0
               |

Oh wait, sock_alloc_send_skb() requests all the bytes in skb->head:

struct sk_buff *sock_alloc_send_skb(struct sock *sk, unsigned long size,
				    int noblock, int *errcode)
{
	return sock_alloc_send_pskb(sk, size, 0, noblock, errcode, 0);
}

Maybe one day we will avoid doing order-4 (or even order-5 in extreme
cases!) allocations for loopback as we did for af_unix :P

I mean, maybe some applications are sending 64KB UDP messages over
loopback right now...
Re: [PATCH] mm: page_alloc: High-order per-cpu page allocator v7
On Wed, Dec 07, 2016 at 11:00:49AM -0800, Eric Dumazet wrote:
> On Wed, 2016-12-07 at 10:12 +0000, Mel Gorman wrote:
>
> > This is the result from netperf running UDP_STREAM on localhost. It was
> > selected on the basis that it is slab-intensive and has been the subject
> > of previous SLAB vs SLUB comparisons with the caveat that this is not
> > testing between two physical hosts.
>
> Interesting results.
>
> netperf UDP_STREAM is not really slab intensive (for large send sizes
> like 16KB).
>

Interesting, because it didn't match what I previously measured, but then
again, when I established that netperf on localhost was slab intensive,
it was also an older kernel. Can you tell me if SLAB or SLUB was enabled
in your test kernel?

Either that or the baseline I used has since been changed from what you
are testing and we're not hitting the same paths.

> Bulk of the storage should be allocated from alloc_skb_with_frags(),
> ie using pages.
>
> And I am not sure we enabled high order pages in this path?
>
> ip_make_skb()
>  __ip_append_data()
>   sock_alloc_send_skb()
>    sock_alloc_send_pskb(..., max_page_order=0)
>     alloc_skb_with_frags(max_page_order=0)
>

It doesn't look like it. While it's not directly related to this patch,
can you give the full stack? I'm particularly curious to see if these
allocations are in an IRQ path or not.

> We probably could enable high-order pages there, if we believe this is
> okay.
>

Ultimately, not a great idea unless you want variable performance depending
on whether high-order pages are available or not. The motivation for the
patch was primarily SLUB-intensive workloads.

-- 
Mel Gorman
SUSE Labs
Re: [PATCH] mm: page_alloc: High-order per-cpu page allocator v7
On Wed, 2016-12-07 at 11:00 -0800, Eric Dumazet wrote:
>
> So far, I believe net/unix/af_unix.c uses PAGE_ALLOC_COSTLY_ORDER as
> max_order, but UDP does not do that yet.

For af_unix, it happened in:

https://git.kernel.org/cgit/linux/kernel/git/davem/net-next.git/commit/?id=28d6427109d13b0f447cba5761f88d3548e83605

This came in to fix a regression, since we had a gigantic slab allocation
in af_unix before:

https://git.kernel.org/cgit/linux/kernel/git/davem/net-next.git/commit/?id=eb6a24816b247c0be6b2e97e68933072874bbe54
Re: [PATCH] mm: page_alloc: High-order per-cpu page allocator v7
On Wed, 2016-12-07 at 10:12 +0000, Mel Gorman wrote:

> This is the result from netperf running UDP_STREAM on localhost. It was
> selected on the basis that it is slab-intensive and has been the subject
> of previous SLAB vs SLUB comparisons with the caveat that this is not
> testing between two physical hosts.

Interesting results.

netperf UDP_STREAM is not really slab intensive (for large send sizes
like 16KB).

Bulk of the storage should be allocated from alloc_skb_with_frags(),
ie using pages.

And I am not sure we enabled high order pages in this path?

ip_make_skb()
 __ip_append_data()
  sock_alloc_send_skb()
   sock_alloc_send_pskb(..., max_page_order=0)
    alloc_skb_with_frags(max_page_order=0)

So far, I believe net/unix/af_unix.c uses PAGE_ALLOC_COSTLY_ORDER as
max_order, but UDP does not do that yet.

We probably could enable high-order pages there, if we believe this is
okay.

Or maybe I missed something and this has already happened? ;)

Thanks.
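[Editorial note: to make the max_page_order contrast concrete, a sketch of the two call sites discussed above. The UDP line is the wrapper quoted later in the thread; the af_unix arguments are paraphrased from net/unix/af_unix.c per Eric's description, not copied verbatim.]

	/* UDP path: sock_alloc_send_skb() pins max_page_order to 0, so the
	 * whole datagram is kmalloc'ed into skb->head (hence the large
	 * contiguous allocations for big messages). */
	skb = sock_alloc_send_pskb(sk, size, 0, noblock, errcode, 0);

	/* af_unix path: allows up to PAGE_ALLOC_COSTLY_ORDER pages to be
	 * attached as frags, with alloc_skb_with_frags() falling back to
	 * smaller orders when contiguous memory is short. */
	skb = sock_alloc_send_pskb(sk, len - data_len, data_len,
				   msg->msg_flags & MSG_DONTWAIT, &err,
				   PAGE_ALLOC_COSTLY_ORDER);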
Re: [PATCH] mm: page_alloc: High-order per-cpu page allocator v7
On Wed, Dec 07, 2016 at 11:11:08AM -0600, Christoph Lameter wrote:
> On Wed, 7 Dec 2016, Mel Gorman wrote:
>
> > 3.0-era kernels had better fragmentation control, higher success rates at
> > allocation etc. I vaguely recall that they had fewer sources of high-order
> > allocations but I don't remember specifics, and part of that could be the
> > lack of THP at the time. The overhead was massive due to massive stalls
> > and excessive reclaim -- hours to complete some high-allocation stress
> > tests even if the success rate was high.
>
> There were a couple of high order page reclaim improvements implemented
> at that time that were later abandoned. I think higher order pages were
> more available then than now.

There were, but the cost was high -- lumpy reclaim was a major source of
the cost, though not the only one. The cost of allocation offset any
benefit of having them. At least for hugepages it did; I don't know about
SLUB because I didn't quantify whether the benefit of SLUB using huge pages
was offset by the allocation cost (I doubt it). The cost later became
intolerable when THP started hitting those paths routinely.

It's not simply a case of going back to how fragmentation control was
managed then, because that would simply reintroduce excessive stalls in
allocation paths.

-- 
Mel Gorman
SUSE Labs
Re: [PATCH] mm: page_alloc: High-order per-cpu page allocator v7
On Wed, 7 Dec 2016, Mel Gorman wrote:

> 3.0-era kernels had better fragmentation control, higher success rates at
> allocation etc. I vaguely recall that they had fewer sources of high-order
> allocations but I don't remember specifics, and part of that could be the
> lack of THP at the time. The overhead was massive due to massive stalls
> and excessive reclaim -- hours to complete some high-allocation stress
> tests even if the success rate was high.

There were a couple of high order page reclaim improvements implemented
at that time that were later abandoned. I think higher order pages were
more available then than now. SLUB was regularly able to get higher order
pages.
Re: [PATCH] mm: page_alloc: High-order per-cpu page allocator v7
On Wed, Dec 07, 2016 at 10:40:47AM -0600, Christoph Lameter wrote:
> On Wed, 7 Dec 2016, Mel Gorman wrote:
>
> > Which is related to the fundamentals of fragmentation control in
> > general. At some point there will have to be a revisit to get back to
> > the type of reliability that existed in the 3.0 era without the massive
> > overhead it incurred. As stated before, I agree it's important but
> > outside the scope of this patch.
>
> What reliability issues are there? 3.x kernels were better in what
> way? Which overhead are we talking about?
>

3.0-era kernels had better fragmentation control, higher success rates at
allocation etc. I vaguely recall that they had fewer sources of high-order
allocations but I don't remember specifics, and part of that could be the
lack of THP at the time. The overhead was massive due to massive stalls
and excessive reclaim -- hours to complete some high-allocation stress
tests even if the success rate was high.

-- 
Mel Gorman
SUSE Labs
Re: [PATCH] mm: page_alloc: High-order per-cpu page allocator v7
On Wed, 7 Dec 2016, Mel Gorman wrote:

> Which is related to the fundamentals of fragmentation control in
> general. At some point there will have to be a revisit to get back to
> the type of reliability that existed in the 3.0 era without the massive
> overhead it incurred. As stated before, I agree it's important but
> outside the scope of this patch.

What reliability issues are there? 3.x kernels were better in what
way? Which overhead are we talking about?

Fragmentation has been a problem for a long time, and the issue gets
worse as memory sizes increase, the hardware improves, and the
expectations on throughput and reliability increase.
Re: [PATCH] mm: page_alloc: High-order per-cpu page allocator v7
On Wed, Dec 07, 2016 at 08:52:27AM -0600, Christoph Lameter wrote:
> On Wed, 7 Dec 2016, Mel Gorman wrote:
>
> > SLUB has been the default small kernel object allocator for quite some time
> > but it is not universally used due to performance concerns and a reliance
> > on high-order pages. The high-order concern has two major components --
>
> SLUB does not rely on high order pages. It falls back to lower order if
> the higher orders are not available. It's a performance concern.
>

Ok -- while SLUB does not rely on high-order pages for functional
correctness, it performs better if high-order pages are available.

> This is also an issue for various other kernel subsystems that really
> would like to have larger contiguous memory areas. We are often seeing
> performance constraints due to the high number of 4k segments when doing
> large-scale block I/O, for example.
>

Which is related to the fundamentals of fragmentation control in
general. At some point there will have to be a revisit to get back to
the type of reliability that existed in the 3.0 era without the massive
overhead it incurred. As stated before, I agree it's important but
outside the scope of this patch.

-- 
Mel Gorman
SUSE Labs
Re: [PATCH] mm: page_alloc: High-order per-cpu page allocator v7
On Wed, 7 Dec 2016, Mel Gorman wrote:

> SLUB has been the default small kernel object allocator for quite some time
> but it is not universally used due to performance concerns and a reliance
> on high-order pages. The high-order concern has two major components --

SLUB does not rely on high order pages. It falls back to lower order if
the higher orders are not available. It's a performance concern.

This is also an issue for various other kernel subsystems that really
would like to have larger contiguous memory areas. We are often seeing
performance constraints due to the high number of 4k segments when doing
large-scale block I/O, for example.

Otherwise I really like what I am seeing here.
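[Editorial note: for readers unfamiliar with the fallback Christoph describes, a simplified sketch of it, condensed from mm/slub.c's allocate_slab() in 4.9-era kernels rather than quoted verbatim.]

	/* The preferred high-order attempt is opportunistic: it must not
	 * warn, retry, or trigger reclaim. Only the minimum-order fallback,
	 * which holds at least one object per slab, may apply pressure. */
	static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
	{
		struct kmem_cache_order_objects oo = s->oo;
		struct page *page;

		/* Try the performance-friendly high order first... */
		page = alloc_slab_page(s, flags | __GFP_NOWARN | __GFP_NORETRY,
				       node, oo);
		if (unlikely(!page)) {
			/* ...then fall back to the smallest usable order. */
			oo = s->min;
			page = alloc_slab_page(s, flags, node, oo);
		}
		return page;
	}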
[PATCH] mm: page_alloc: High-order per-cpu page allocator v7
After discussions with Joonsoo, I added a guarantee that high-order lists
will be drained regardless of batch size. While I maintained it was
unnecessary, it also did little harm other than increasing the size of the
per-cpu structure. There were slight variations in performance, but a mix
of gains and losses within the noise relative to the previous release.

Changelog since v6
o Guarantee that per-cpu lists are drained regardless of batch size
o Dropped patch 1 as Andrew already picked it up

Changelog since v5
o Changelog clarification in patch 1
o Additional comments in patch 2

Changelog since v4
o Avoid pcp->count getting out of sync if struct page gets corrupted

Changelog since v3
o Allow high-order atomic allocations to use reserves

Changelog since v2
o Correct initialisation to avoid -Woverflow warning

SLUB has been the default small kernel object allocator for quite some time
but it is not universally used due to performance concerns and a reliance
on high-order pages. The high-order concern has two major components --
high-order pages are not always available, and high-order page allocations
potentially contend on the zone->lock. This patch addresses some concerns
about zone lock contention by extending the per-cpu page allocator to
cache high-order pages. The patch makes the following modifications:

o New per-cpu lists are added to cache the high-order pages. This increases
  the cache footprint of the per-cpu allocator and overall usage, but for
  some workloads this will be offset by reduced contention on zone->lock.
  The first MIGRATE_PCPTYPES entries in the list are per-migratetype. The
  remaining are high-order caches up to and including
  PAGE_ALLOC_COSTLY_ORDER.

o pcp accounting during free is now confined to free_pcppages_bulk as it's
  impossible for the caller to know exactly how many pages were freed. Due
  to the high-order caches, the number of pages drained for a request is
  no longer precise.

o The high watermark for per-cpu pages is increased to reduce the
  probability that a single refill causes a drain on the next free.

The benefit depends on both the workload and the machine, as ultimately the
determining factor is whether cache line bounces on zone->lock or contention
is a problem. The patch was tested on a variety of workloads and machines,
some of which are reported here. This is the result from netperf running
UDP_STREAM on localhost. It was selected on the basis that it is
slab-intensive and has been the subject of previous SLAB vs SLUB comparisons,
with the caveat that this is not testing between two physical hosts.
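[Editorial note, before the results: an illustrative sketch of the list layout described above. The helper and constant arithmetic are for illustration only and are not lifted from the patch.]

	/* Order-0 pages keep one list per migratetype; orders 1 through
	 * PAGE_ALLOC_COSTLY_ORDER share one list each, regardless of
	 * migratetype. */
	#define NR_PCP_LISTS (MIGRATE_PCPTYPES + PAGE_ALLOC_COSTLY_ORDER)

	static inline unsigned int pcp_list_index(unsigned int order, int migratetype)
	{
		if (!order)
			return migratetype;		/* order-0: per-migratetype */
		return MIGRATE_PCPTYPES + order - 1;	/* order 1..COSTLY: per-order */
	}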
2-socket modern machine
                               4.9.0-rc5             4.9.0-rc5
                                 vanilla             hopcpu-v7
Hmean    send-64         178.38 (  0.00%)      263.48 ( 47.71%)
Hmean    send-128        351.49 (  0.00%)      523.69 ( 48.99%)
Hmean    send-256        671.23 (  0.00%)     1021.92 ( 52.24%)
Hmean    send-1024      2663.60 (  0.00%)     3909.75 ( 46.78%)
Hmean    send-2048      5126.53 (  0.00%)     7365.98 ( 43.68%)
Hmean    send-3312      7949.99 (  0.00%)    11077.98 ( 39.35%)
Hmean    send-4096      9433.56 (  0.00%)    12715.42 ( 34.79%)
Hmean    send-8192     15940.64 (  0.00%)    22322.39 ( 40.03%)
Hmean    send-16384    26699.54 (  0.00%)    32918.05 ( 23.29%)
Hmean    recv-64         178.38 (  0.00%)      263.46 ( 47.70%)
Hmean    recv-128        351.49 (  0.00%)      523.65 ( 48.98%)
Hmean    recv-256        671.20 (  0.00%)     1021.54 ( 52.20%)
Hmean    recv-1024      2663.45 (  0.00%)     3909.13 ( 46.77%)
Hmean    recv-2048      5126.26 (  0.00%)     7364.61 ( 43.66%)
Hmean    recv-3312      7949.50 (  0.00%)    11076.31 ( 39.33%)
Hmean    recv-4096      9433.04 (  0.00%)    12713.49 ( 34.78%)
Hmean    recv-8192     15939.64 (  0.00%)    22320.05 ( 40.03%)
Hmean    recv-16384    26698.44 (  0.00%)    32913.66 ( 23.28%)

1-socket 6 year old machine
                               4.9.0-rc5             4.9.0-rc5
                                 vanilla             hopcpu-v7
Hmean    send-64          87.47 (  0.00%)      127.41 ( 45.67%)
Hmean    send-128        174.36 (  0.00%)      256.71 ( 47.23%)
Hmean    send-256        347.52 (  0.00%)      506.40 ( 45.72%)
Hmean    send-1024      1363.03 (  0.00%)     1968.24 ( 44.40%)
Hmean    send-2048      2632.68 (  0.00%)     3742.86 ( 42.17%)
Hmean    send-3312      4123.19 (  0.00%)     5849.80 ( 41.88%)
Hmean    send-4096      5056.48 (  0.00%)     7119.10 ( 40.79%)
Hmean    send-8192      8784.22 (  0.00%)    12161.53 ( 38.45%)
Hmean    send-16384    15081.60 (  0.00%)    19418.36 ( 28.76%)
Hmean    recv-64          86.19 (  0.00%)      126.84 ( 47.16%)
Hmean    recv-128        173.93 (  0.00%)      255.62 ( 46.96%)
Hmean    recv-256        346.19 (  0.00%)      503.73 ( 45.51%)
Hmean    recv-1024      1358.28 (  0.00%)     1957.11 ( 44.09%)
Hmean    recv-2048      2623.45 (  0.00%)     3716.88 ( 41.68%)
Hmean    recv-3312      4108.63