On Thu, Dec 08, 2016 at 06:19:51PM +0100, Jesper Dangaard Brouer wrote:
> > > See patch below signature.
> > >
> > > Besides I think you misunderstood me, you can adjust:
> > > sysctl net.core.rmem_max
> > > sysctl net.core.wmem_max
> > >
> > > And you should if you plan to use/set 851968 as so
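The sysctls mentioned above can be raised as follows (a sketch; 851968 is the value quoted in the thread, the commands need root, and the change does not persist across reboots without an /etc/sysctl.d entry):

```shell
# Raise the per-socket receive/send buffer ceilings.
sysctl -w net.core.rmem_max=851968
sysctl -w net.core.wmem_max=851968

# Verify the new limits; applications can now request buffers up to
# this size via SO_RCVBUF/SO_SNDBUF.
sysctl net.core.rmem_max net.core.wmem_max
```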
On Thu, 8 Dec 2016 15:11:01 +
Mel Gorman wrote:
> On Thu, Dec 08, 2016 at 03:48:13PM +0100, Jesper Dangaard Brouer wrote:
> > On Thu, 8 Dec 2016 11:06:56 +
> > Mel Gorman wrote:
> >
> > > On Thu, Dec 08, 2016 at 11:43:08AM +0100, Jesper Dangaard Brouer wrote:
> > > > > That's expected.
On Thu, 2016-12-08 at 09:18 +, Mel Gorman wrote:
> Yes, I set it for higher speed networks as a starting point to remind me
> to examine rmem_default or socket configurations if any significant packet
> loss is observed.
Note that your page allocator changes might show more impact with
netperf
On Thu, Dec 08, 2016 at 03:48:13PM +0100, Jesper Dangaard Brouer wrote:
> On Thu, 8 Dec 2016 11:06:56 +
> Mel Gorman wrote:
>
> > On Thu, Dec 08, 2016 at 11:43:08AM +0100, Jesper Dangaard Brouer wrote:
> > > > That's expected. In the initial sniff-test, I saw negligible packet
> > > > loss.
On Thu, 8 Dec 2016 11:06:56 +
Mel Gorman wrote:
> On Thu, Dec 08, 2016 at 11:43:08AM +0100, Jesper Dangaard Brouer wrote:
> > > That's expected. In the initial sniff-test, I saw negligible packet loss.
> > > I'm waiting to see what the full set of network tests look like before
> > > doing any further adjustments.
On Thu, Dec 08, 2016 at 11:43:08AM +0100, Jesper Dangaard Brouer wrote:
> > That's expected. In the initial sniff-test, I saw negligible packet loss.
> > I'm waiting to see what the full set of network tests look like before
> > doing any further adjustments.
>
> For netperf I will not recommend a
On Thu, 8 Dec 2016 09:18:06 +
Mel Gorman wrote:
> On Thu, Dec 08, 2016 at 09:22:31AM +0100, Jesper Dangaard Brouer wrote:
> > On Wed, 7 Dec 2016 23:25:31 +
> > Mel Gorman wrote:
> >
> > > On Wed, Dec 07, 2016 at 09:19:58PM +, Mel Gorman wrote:
> > > > At small packet sizes on localhost
On Thu, Dec 08, 2016 at 09:22:31AM +0100, Jesper Dangaard Brouer wrote:
> On Wed, 7 Dec 2016 23:25:31 +
> Mel Gorman wrote:
>
> > On Wed, Dec 07, 2016 at 09:19:58PM +, Mel Gorman wrote:
> > > At small packet sizes on localhost, I see relatively low page allocator
> > > activity except during
On Wed, 7 Dec 2016 23:25:31 +
Mel Gorman wrote:
> On Wed, Dec 07, 2016 at 09:19:58PM +, Mel Gorman wrote:
> > At small packet sizes on localhost, I see relatively low page allocator
> > activity except during the socket setup and other unrelated activity
> > (khugepaged, irqbalance, some btrfs stuff)
On Wed, Dec 07, 2016 at 09:19:58PM +, Mel Gorman wrote:
> At small packet sizes on localhost, I see relatively low page allocator
> activity except during the socket setup and other unrelated activity
> (khugepaged, irqbalance, some btrfs stuff) which is curious as it's
> less clear why the per
On Wed, Dec 07, 2016 at 12:10:24PM -0800, Eric Dumazet wrote:
> On Wed, 2016-12-07 at 19:48 +, Mel Gorman wrote:
> >
> >
> > Interesting because it didn't match what I previously measured but then
> > again, when I established that netperf on localhost was slab intensive,
> > it was also an older kernel.
On Wed, 2016-12-07 at 19:48 +, Mel Gorman wrote:
>
>
> Interesting because it didn't match what I previously measured but then
> again, when I established that netperf on localhost was slab intensive,
> it was also an older kernel. Can you tell me if SLAB or SLUB was enabled
> in your test kernel?
On Wed, Dec 07, 2016 at 11:00:49AM -0800, Eric Dumazet wrote:
> On Wed, 2016-12-07 at 10:12 +, Mel Gorman wrote:
>
> > This is the result from netperf running UDP_STREAM on localhost. It was
> > selected on the basis that it is slab-intensive and has been the subject
> > of previous SLAB vs SLUB
On Wed, 2016-12-07 at 11:00 -0800, Eric Dumazet wrote:
>
> So far, I believe net/unix/af_unix.c uses PAGE_ALLOC_COSTLY_ORDER as
> max_order, but UDP does not do that yet.
For af_unix, it happened in
https://git.kernel.org/cgit/linux/kernel/git/davem/net-next.git/commit/?id=28d6427109d13b0f447cb
On Wed, 2016-12-07 at 10:12 +, Mel Gorman wrote:
> This is the result from netperf running UDP_STREAM on localhost. It was
> selected on the basis that it is slab-intensive and has been the subject
> of previous SLAB vs SLUB comparisons with the caveat that this is not
> testing between two ph
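As a rough sketch of the kind of run being discussed (a hypothetical invocation; the thread does not give the exact command line, and the message sizes below are illustrative):

```shell
# Start the netperf server, then drive UDP_STREAM over loopback
# at several message sizes. Requires the netperf package.
netserver -p 12865
for size in 64 1024 8192 16384; do
    netperf -t UDP_STREAM -H 127.0.0.1 -l 30 -- -m "$size" -M "$size"
done
```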
On Wed, Dec 07, 2016 at 11:11:08AM -0600, Christoph Lameter wrote:
> On Wed, 7 Dec 2016, Mel Gorman wrote:
>
> > 3.0-era kernels had better fragmentation control, higher success rates at
> > allocation etc. I vaguely recall that it had fewer sources of high-order
> > allocations but I don't remember
On Wed, 7 Dec 2016, Mel Gorman wrote:
> 3.0-era kernels had better fragmentation control, higher success rates at
> allocation etc. I vaguely recall that it had fewer sources of high-order
> allocations but I don't remember specifics and part of that could be the
> lack of THP at the time. The ove
On Wed, Dec 07, 2016 at 10:40:47AM -0600, Christoph Lameter wrote:
> On Wed, 7 Dec 2016, Mel Gorman wrote:
>
> > Which is related to the fundamentals of fragmentation control in
> > general. At some point there will have to be a revisit to get back to
> > the type of reliability that existed in 3.0-era
On Wed, 7 Dec 2016, Mel Gorman wrote:
> Which is related to the fundamentals of fragmentation control in
> general. At some point there will have to be a revisit to get back to
> the type of reliability that existed in 3.0-era without the massive
> overhead it incurred. As stated before, I agree i
On Wed, Dec 07, 2016 at 08:52:27AM -0600, Christoph Lameter wrote:
> On Wed, 7 Dec 2016, Mel Gorman wrote:
>
> > SLUB has been the default small kernel object allocator for quite some time
> > but it is not universally used due to performance concerns and a reliance
> > on high-order pages. The high-order
On Wed, 7 Dec 2016, Mel Gorman wrote:
> SLUB has been the default small kernel object allocator for quite some time
> but it is not universally used due to performance concerns and a reliance
> on high-order pages. The high-order concerns have two major components --
SLUB does not rely on high order
After discussions with Joonsoo, I added a guarantee that high-order
lists will be drained regardless of batch size. While I maintained it was
unnecessary, it also did little harm other than increasing the size of
the per-cpu structure. There were slight variations in performance but a
mix of gains