Why is this allocated statically? I don't understand the difficulty of a
dynamically allocated and thus unrestricted implementation. Is there some
performance advantage to a bounded static allocation? Or is it that you
use O(n) lookups and need to keep n small to avoid exposing that to users?
I ha
Thanks, I think that will be very useful.
Best,
Udayanga
On Wed, Jan 9, 2019 at 1:39 PM Jeff Squyres (jsquyres) via users
<users@lists.open-mpi.org> wrote:
> You can set this MCA var on a site-wide basis in a file:
>
> https://www.open-mpi.org/faq/?category=tuning#setting-mca-params
>
>
>
Jeff:
You're welcome.
Not a problem.
I was trying to email somebody more directly about this recommended change
only because I had just run into this problem myself and spent a bit of time
trying to figure out why OpenFOAM on OpenMPI wasn't working when trying to
get it running across the nodes.
You can set this MCA var on a site-wide basis in a file:
https://www.open-mpi.org/faq/?category=tuning#setting-mca-params
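For anyone who finds this later: the site-wide file is just plain "name = value"
lines. A minimal sketch, where the parameter name and value are only placeholders
and not necessarily the variable being discussed here:

  # $prefix/etc/openmpi-mca-params.conf  (or per-user: $HOME/.openmpi/mca-params.conf)
  osc_rdma_max_attach = 64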
> On Jan 9, 2019, at 1:18 PM, Udayanga Wickramasinghe wrote:
>
> Thanks. Yes, I am aware of that; however, I currently have a requirement to
> increase the default.
>
Thanks. Yes, I am aware of that; however, I currently have a requirement to
increase the default.
Best,
Udayanga
On Wed, Jan 9, 2019 at 9:10 AM Nathan Hjelm via users
<users@lists.open-mpi.org> wrote:
> If you need to support more attachments you can set the value of that
> variable either by se
Good suggestion; thank you!
> On Jan 8, 2019, at 9:44 PM, Ewen Chan wrote:
>
> To Whom It May Concern:
>
> Hello. I'm new here and I got here via OpenFOAM.
>
> In the FAQ regarding running OpenMPI programs, specifically where someone
> might be able to run their OpenMPI program on a local no
Eduardo,
The first part of the configure command line is for an install in /usr, but
then there is '--prefix=/opt/openmpi/4.0.0' and this is very fishy.
You should also use '--with-hwloc=external'.
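For what it's worth, a self-consistent build keeps the prefix and the rest of
the configure line in agreement. A minimal sketch (only the prefix path and the
hwloc flag come from this thread; everything else is illustrative):

  ./configure --prefix=/opt/openmpi/4.0.0 --with-hwloc=external
  make -j 8 all
  make install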
How many nodes are you running on and which interconnect are you using?
What if you
mpirun --mca pml
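(The suggestion above is cut off; purely as an illustration, forcing a specific
point-to-point messaging layer looks something like the following, where the
component name ob1 and the application name are my placeholders, not necessarily
what was being suggested:)

  mpirun --mca pml ob1 -np 4 ./my_app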
If you need to support more attachments you can set the value of that variable
either by setting:
Environment:
OMPI_MCA_osc_rdma_max_attach
mpirun command line:
--mca osc_rdma_max_attach
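Concretely, both forms look something like this (the value 64 and the
application name are placeholders; use whatever limit you actually need):

  # via the environment
  export OMPI_MCA_osc_rdma_max_attach=64
  mpirun -np 4 ./my_app

  # or directly on the mpirun command line
  mpirun --mca osc_rdma_max_attach 64 -np 4 ./my_app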
Keep in mind that each attachment may use an underlying hardware resource that
may be easy to exhaust (h
Hi.
I'm testing Open MPI 4.0.0 and I'm struggling with a weird behaviour in a very
simple example (very frustrating). I'm getting the following error returned by
MPI_Send:
[gafront4:25692] *** An error occurred in MPI_Send
[gafront4:25692] *** reported by process [3152019457