(e.g., Linux CMA, Linux KNEM, XPMEM)
Does that help?
> On Mar 10, 2016, at 12:25 PM, BRADLEY, PETER C PW
> wrote:
>
> This is an academic exercise, obviously. The curve shown comes from one pair
> of ranks running on the same node alternating between MPI_Send and MPI_Recv.
>
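For context, Linux CMA here is the cross-memory-attach syscall pair (process_vm_readv/process_vm_writev) that lets one process copy directly to or from another process's address space, which is what such single-copy shared-memory paths build on. A minimal sketch of the read side (illustrative only, no error handling, glibc 2.15 or newer assumed):

    /* Copy 'len' bytes from address 'remote_addr' in process 'pid' into a
       local buffer using the CMA syscall.  One syscall moves the data; there
       is no intermediate bounce through a shared-memory segment. */
    #define _GNU_SOURCE
    #include <sys/uio.h>
    #include <sys/types.h>

    ssize_t cma_read(pid_t pid, void *remote_addr, void *local_buf, size_t len)
    {
        struct iovec local  = { .iov_base = local_buf,   .iov_len = len };
        struct iovec remote = { .iov_base = remote_addr, .iov_len = len };
        return process_vm_readv(pid, &local, 1, &remote, 1, 0);
    }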
Thursday, March 10, 2016, BRADLEY, PETER C PW
wrote:
> I’m curious what causes the hump in the pingpong bandwidth curve when
> running on shared memory. Here’s an example running on a fairly antiquated
> single-socket 4 core laptop with linux (2.6.32 kernel). Is this a cache
I'm curious what causes the hump in the pingpong bandwidth curve when running
on shared memory. Here's an example running on a fairly antiquated
single-socket 4 core laptop with linux (2.6.32 kernel). Is this a cache
effect? Something in OpenMPI itself, or a combination?
[attached image: pingpong bandwidth curve]
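The benchmark in question is the usual alternating MPI_Send/MPI_Recv pingpong between two ranks; a minimal sketch (not the poster's actual code, the iteration count and default message size are placeholders):

    /* Pingpong sketch: rank 0 sends 'len' bytes to rank 1 and waits for them
       to come back; bandwidth follows from the round-trip time. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, iters = 100;
        int len = (argc > 1) ? atoi(argv[1]) : 1 << 20;  /* message size in bytes */
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        char *buf = malloc(len);

        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(buf, len, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, len, MPI_BYTE, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, len, MPI_BYTE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, len, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
            }
        }
        if (rank == 0)
            printf("%d bytes: %.1f MB/s\n", len,
                   2.0 * len * iters / (MPI_Wtime() - t0) / 1.0e6);

        free(buf);
        MPI_Finalize();
        return 0;
    }

Sweeping len from a few bytes up to several megabytes on a single node reproduces the kind of curve in question.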
CG solvers make use of dot products and other loops whose results may not be
exactly the same depending on whether those operations are performed serially or in
parallel. As the solver iterates, those differences *may* stack up. However,
it's also really easy to write a subtle bug that causes the s
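The dot-product point is just floating-point non-associativity; a tiny standalone C example (values chosen only to exaggerate the effect, nothing to do with any particular solver) showing that the same numbers summed in two different orders need not agree:

    /* Summing identical values in two orders gives different results, which
       is why a parallel reduction (e.g. MPI_Allreduce over pieces of a dot
       product) need not match the serial sum bit for bit. */
    #include <stdio.h>

    int main(void)
    {
        double v[4] = { 1.0e16, 1.0, -1.0e16, 1.0 };
        double fwd = 0.0, rev = 0.0;
        for (int i = 0; i < 4; i++)  fwd += v[i];   /* one ordering     */
        for (int i = 3; i >= 0; i--) rev += v[i];   /* another ordering */
        printf("forward = %g, reverse = %g\n", fwd, rev);
        return 0;
    }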
We’re seeing some abnormal performance behavior when running an OpenMPI 1.4.4
application on RH6.4 using Mellanox OFED 1.5.3. Under certain circumstances,
system CPU time starts to dominate and performance tails off severely. This behavior
does not happen when the same job is run over TCP. Is there
I just wanted to report back that patch 2824 is working great!
Thanks,
Pete
From: BRADLEY, PETER C PW
Sent: Sunday, July 10, 2011 8:58 PM
To: 'us...@open-mpi.org'
Subject: max entries in procgroup file for OpenMPI 1.5?
I know 1.4.x has a limit of 128 entries for procgroup files. To avoid some
ugly surgery on a legacy application, we'd really like to have the ability to
put up to 1024 lines in a procgroup file. Has the limit been raised at all in
1.5? Could it be?
Pete
Pete Bradley
High Performance