* Alex Thorlton wrote:
> Robin,
>
> I tweaked one of our other tests to behave pretty much exactly as I
> - malloc a large array
> - Spawn a specified number of threads
> - Have each thread touch small, evenly spaced chunks of the array (e.g.
> for 128 threads, the array is divided into 128 chunks, and each thread
> [...]
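One plausible reading of that test, sketched as a standalone program; the array size, thread count, and per-thread touch size here are assumptions for illustration, not values taken from the thread:

#include <pthread.h>
#include <stdlib.h>

#define NTHREADS 128
#define ARRAY_SZ (8UL << 30)	/* 8 GiB; assumed */
#define TOUCH_SZ 4096UL		/* how much each thread touches; assumed */

static char *array;

/* Each thread writes a small region at its own evenly spaced offset,
 * so the first-touch faults are spread across all NTHREADS threads. */
static void *toucher(void *arg)
{
	long id = (long)arg;
	char *base = array + id * (ARRAY_SZ / NTHREADS);
	size_t off;

	for (off = 0; off < TOUCH_SZ; off += 64)
		base[off] = 1;
	return NULL;
}

int main(void)
{
	pthread_t tids[NTHREADS];
	long i;

	array = malloc(ARRAY_SZ);
	if (!array)
		return 1;

	for (i = 0; i < NTHREADS; i++)
		pthread_create(&tids[i], NULL, toucher, (void *)i);
	for (i = 0; i < NTHREADS; i++)
		pthread_join(tids[i], NULL);
	return 0;
}

(Build with gcc -O2 -pthread. With THP enabled, each small touch faults in, and zeroes, a full 2 MiB huge page instead of a single 4 KiB page, which is the overhead being measured.)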
On Tue, Aug 27, 2013 at 12:01:01PM -0500, Robin Holt wrote:
> Alex,
>
> Although the explanation seems plausible, have you verified this is
> actually possible? You could make a simple pthread test case which
> allocates a getpagesize() * area, prints its address and then each
> thread migrate and reference their page. Have the task then sleep()
> before exit. Look [...]
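A minimal version of the test Robin suggests might look like the sketch below; the thread count, and the use of pthread_setaffinity_np() to make each thread "migrate" before referencing its page, are assumptions (the size multiplier after getpagesize() is elided in this archive):

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define NTHREADS 8	/* assumed */

static char *area;
static long pagesz;

static void *worker(void *arg)
{
	long id = (long)arg;
	cpu_set_t set;

	/* Move this thread to another CPU before touching its page. */
	CPU_ZERO(&set);
	CPU_SET(id % sysconf(_SC_NPROCESSORS_ONLN), &set);
	pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

	area[id * pagesz] = 1;	/* reference this thread's page */
	return NULL;
}

int main(void)
{
	pthread_t tids[NTHREADS];
	long i;

	pagesz = getpagesize();
	area = malloc(pagesz * NTHREADS);
	if (!area)
		return 1;
	printf("area at %p\n", (void *)area);

	for (i = 0; i < NTHREADS; i++)
		pthread_create(&tids[i], NULL, worker, (void *)i);
	for (i = 0; i < NTHREADS; i++)
		pthread_join(tids[i], NULL);

	/* Hold the mapping so it can be inspected, e.g. in /proc/<pid>/numa_maps. */
	sleep(60);
	return 0;
}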
On Wed, Aug 28, 2013 at 11:18:14AM +0200, Ingo Molnar wrote:
> But ideally THP should detect cases where a hugepage is heavily used from
> multiple nodes and un-HP the page in question. Or not turn it into a
> hugepage in the first place. (We actually have a memory access pattern
> sampling facility [...]
> Here's a more up-to-date version: https://lkml.org/lkml/2012/8/20/337
These don't seem to give us a noticeable performance change either:
With THP:

real    22m34.279s
user    10797m35.984s
sys     39m18.188s

Without THP:

real    4m48.957s
user    2118m23.208s
sys     113m12.740s

Looks like we [...]
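For reference, a mapping can be opted out of THP from userspace with madvise(MADV_NOHUGEPAGE) (available since Linux 2.6.38); this is a generic illustration, not part of the patches under discussion:

#include <sys/mman.h>
#include <stdio.h>

int main(void)
{
	size_t len = 1UL << 30;	/* 1 GiB; arbitrary */
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED)
		return 1;

	/* Ask the kernel not to back this range with huge pages. */
	if (madvise(p, len, MADV_NOHUGEPAGE))
		perror("madvise");

	/* ... run the workload against p ... */
	return 0;
}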
On Fri, Aug 16, 2013 at 01:17:28PM -0500, Alex Thorlton wrote:
> I actually didn't write these patches (made a few tweaks to get them
> running on the latest kernel though). They were submitted last July by
> Peter Zijlstra.
By Kirill, I don't think I've ever touched them.
> The messages from [...]
> Could you add some actual descriptions to these patches that say why you
> are doing this, and why this particular patch is needed and implemented
> this way?
>
> You mention that THP is slow for you, then go on to implement some
> non-cached page zero'ing, but you never quite connect the dots.
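For context, "non-cached page zeroing" means clearing pages with non-temporal stores, so that zero-filling a 2 MiB huge page does not evict the workload's data from cache. A userspace analogue using SSE2 intrinsics is sketched below; the patches under discussion apparently do something similar inside the kernel's huge-page clearing path, and this is not their code:

#include <emmintrin.h>	/* SSE2: _mm_stream_si128, _mm_sfence */
#include <stddef.h>

/* Zero a 16-byte-aligned buffer with non-temporal (streaming) stores,
 * which bypass the cache hierarchy. Illustrative only. */
static void zero_nocache(void *buf, size_t len)
{
	const __m128i zero = _mm_setzero_si128();
	char *p = buf;
	size_t off;

	for (off = 0; off < len; off += 16)
		_mm_stream_si128((__m128i *)(p + off), zero);

	_mm_sfence();	/* make the streaming stores globally visible */
}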
On 08/16/2013 07:33 AM, Alex Thorlton wrote:
> ---
> mm/huge_memory.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index a92012a..55ec681 100644
> [...]