Interesting, I found a small improvement in total clock time through this
area.
I tweaked page_alloc_init_late to have a timer, like
deferred_init_memmap, and this patch showed a small improvement.
Ok thanks for your help.
On 07/06/2015 12:45 PM, Daniel J Blueman wrote:
Hi Nate,
On Wed, Jun 24, 2015 at 11:50 PM, Nathan Zimmer wrote:
My apologies for taking so long to get back to this.
I think I did locate two potential sources of slowdown.
One is the set_cpus_allowed_ptr as I have noted previously.
However I only notice that on the very largest boxes.
I did cobble together a patch that seems to help.
On Wed, Jun 24, 2015 at 05:50:28PM -0500, Nathan Zimmer wrote:
> From e18aa6158a60c2134b4eef93c856f3b5b250b122 Mon Sep 17 00:00:00 2001
> From: Nathan Zimmer
> Date: Thu, 11 Jun 2015 10:47:39 -0500
> Subject: [RFC] Avoid the contention in set_cpus_allowed
>
> Noticing some scaling issues at large
On Thu, Jun 25, 2015 at 09:48:55PM +0100, Mel Gorman wrote:
> On Wed, Jun 24, 2015 at 05:50:28PM -0500, Nathan Zimmer wrote:
> > My apologies for taking so long to get back to this.
> >
> > I think I did locate two potential sources of slowdown.
> > One is the set_cpus_allowed_ptr as I have noted previously.
On Thu, Jun 25, 2015 at 09:57:44PM +0100, Mel Gorman wrote:
> On Thu, Jun 25, 2015 at 09:48:55PM +0100, Mel Gorman wrote:
> > On Wed, Jun 24, 2015 at 05:50:28PM -0500, Nathan Zimmer wrote:
> > > My apologies for taking so long to get back to this.
> > >
> > > I think I did locate two potential sources of slowdown.
On Wed, Jun 24, 2015 at 05:50:28PM -0500, Nathan Zimmer wrote:
> My apologies for taking so long to get back to this.
>
> I think I did locate two potential sources of slowdown.
> One is the set_cpus_allowed_ptr as I have noted previously.
> However I only notice that on the very largest boxes.
>
My apologies for taking so long to get back to this.
I think I did locate two potential sources of slowdown.
One is the set_cpus_allowed_ptr as I have noted previously.
However I only notice that on the very largest boxes.
I did cobble together a patch that seems to help.
The other spot I suspect
--
Daniel J Blueman
Principal Software Engineer, Numascale
On Sat, May 23, 2015 at 1:14 AM, Waiman Long wrote:
On 05/22/2015 05:33 AM, Mel Gorman wrote:
On Fri, May 22, 2015 at 02:30:01PM +0800, Daniel J Blueman wrote:
On Thu, May 14, 2015 at 6:03 PM, Daniel J Blueman wrote:
On Fri, 2015-05-22 at 13:14 -0400, Waiman Long wrote:
> I think the non-temporal patch benefits mainly AMD systems. I have tried
> the patch on both DragonHawk and it actually made it boot up a little
> bit slower. I think the Intel optimized "rep stosb" instruction (used in
> memset) is perform
On Fri, May 22, 2015 at 02:30:01PM +0800, Daniel J Blueman wrote:
> On Thu, May 14, 2015 at 6:03 PM, Daniel J Blueman wrote:
> >On Thu, May 14, 2015 at 12:31 AM, Mel Gorman wrote:
> >>On Wed, May 13, 2015 at 10:53:33AM -0500, nzimmer wrote:
> >>> I just noticed a hang on my largest box.
> >>
On Thu, May 14, 2015 at 6:03 PM, Daniel J Blueman wrote:
On Thu, May 14, 2015 at 12:31 AM, Mel Gorman wrote:
On Wed, May 13, 2015 at 10:53:33AM -0500, nzimmer wrote:
I just noticed a hang on my largest box.
I can only reproduce with large core counts, if I turn down the
number of cpus it doesn't have an issue.
On Tue, May 19, 2015 at 01:31:28PM -0500, nzimmer wrote:
> After double checking the patches it seems everything is ok.
>
> I had to rerun quite a bit since the machine was reconfigured and I
> wanted to be thorough.
> My latest timings are quite close to my previous reported numbers.
>
> The hang issue I encountered turned out to be unrelated to these patches.
After double checking the patches it seems everything is ok.
I had to rerun quite a bit since the machine was reconfigured and I
wanted to be thorough.
My latest timings are quite close to my previous reported numbers.
The hang issue I encountered turned out to be unrelated to these patches.
Well, I did get in some tests yesterday afternoon, and with some simple
timers found that occasionally a huge amount of time was spent in this
snippet at the top of deferred_init_memmap():

static int __init deferred_init_memmap(void *data)
{
	...
	/* Bind memory initialisation thread to a local node if possible */
	if (!cpumask_empty(cpumask))
		set_cpus_allowed_ptr(current, cpumask);
On Thu, May 14, 2015 at 12:31 AM, Mel Gorman wrote:
On Wed, May 13, 2015 at 10:53:33AM -0500, nzimmer wrote:
I just noticed a hang on my largest box.
I can only reproduce with large core counts, if I turn down the
number of cpus it doesn't have an issue.
Odd. The number of core counts
On Wed, May 13, 2015 at 10:53:33AM -0500, nzimmer wrote:
> I just noticed a hang on my largest box.
> I can only reproduce with large core counts, if I turn down the
> number of cpus it doesn't have an issue.
>
Odd. The number of core counts should make little difference as only
one CPU per
I just noticed a hang on my largest box.
I can only reproduce with large core counts, if I turn down the number
of cpus it doesn't have an issue.
Also as time goes on the amount of time required to initialize pages
goes up.
log_uv48_05121052:[ 177.250385] node 0 initialised, 14950072 pages
On Thu, 7 May 2015 23:52:26 +0100 Mel Gorman wrote:
> As for the patch sequencing, I'm ok
> with adding the patch on top if you are because that preserves the testing
> history. If you're unhappy, I can shuffle it into a better place and resend
> the full series that includes all the fixes so far.
On Thu, May 07, 2015 at 03:09:32PM -0700, Andrew Morton wrote:
> On Thu, 7 May 2015 08:25:18 +0100 Mel Gorman wrote:
>
> > Waiman Long reported that 24TB machines hit OOM during basic setup when
> > struct page initialisation was deferred. One approach is to initialise
> > memory on demand but it interferes with page allocator paths.
On Thu, 7 May 2015 08:25:18 +0100 Mel Gorman wrote:
> Waiman Long reported that 24TB machines hit OOM during basic setup when
> struct page initialisation was deferred. One approach is to initialise memory
> on demand but it interferes with page allocator paths. This patch creates
> dedicated threads