On 11/23/2010 06:47 AM, Barrett, Brian W wrote:
Short answer: we need the "extra" decrement at the end of MPI init.
Long answer: OK, so I was somewhat wrong :)
(I'm surprised this didn't show up in testing.)
Confirmed with our basic pingpong test:
vayu2:~/MPI > mpirun -n 2 ./pp142 | head -
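For reference, a minimal ping-pong of the kind used above might look like the
sketch below (this is not the actual pp142 source; the message size and
iteration count are arbitrary):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, i, niter = 1000;
    char buf[8];
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    t0 = MPI_Wtime();
    for (i = 0; i < niter; i++) {
        if (rank == 0) {
            MPI_Send(buf, sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("average one-way latency: %g us\n",
               (t1 - t0) * 1e6 / (2.0 * niter));

    MPI_Finalize();
    return 0;
}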
Is there any plan to support NUMA memory binding for tasks?
Even with bind-to-core and memory affinity in 1.4.3 we were seeing 15-20%
variation in run times on a Nehalem cluster. This turned out to be mostly due
to bad page placement. Residual pagecache pages from the last job on a node (or th
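One way to see that kind of placement directly (not from the original thread,
just an illustrative sketch) is to ask the kernel which NUMA node each page of
a buffer actually landed on, using move_pages(2) in query mode; link with
-lnuma:

#include <numaif.h>   /* move_pages() */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    long pagesize = sysconf(_SC_PAGESIZE);
    enum { NPAGES = 16 };
    char *buf = malloc(NPAGES * pagesize);
    void *pages[NPAGES];
    int status[NPAGES];
    int i;

    memset(buf, 0, NPAGES * pagesize);   /* first touch places the pages */
    for (i = 0; i < NPAGES; i++)
        pages[i] = buf + i * pagesize;

    /* nodes == NULL means "query only": status[i] gets the node of page i */
    if (move_pages(0 /* self */, NPAGES, pages, NULL, status, 0) != 0) {
        perror("move_pages");
        return 1;
    }

    for (i = 0; i < NPAGES; i++)
        printf("page %d is on NUMA node %d\n", i, status[i]);

    free(buf);
    return 0;
}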
On 12/14/2010 01:29 AM, Jeff Squyres wrote:
On Dec 10, 2010, at 4:56 PM, David Singleton wrote:
Is there any plan to support NUMA memory binding for tasks?
Yes.
For some details on what we're planning for affinity, see the BOF slides that I presented
at SC'10 on the OMPI web s
On 12/14/2010 09:06 AM, Jeff Squyres wrote:
Should we add an MCA parameter to switch between BIND and PREFERRED, and
perhaps default to BIND?
I'm not sure BIND should be the default for everyone - memory-imbalanced jobs
might page badly in this case. But, yes, we would like an MCA parameter to choos
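For readers who hit this in the archive: BIND and PREFERRED correspond to the
kernel's MPOL_BIND and MPOL_PREFERRED memory policies. A strict bind reclaims
or swaps rather than spilling onto another node, which is why imbalanced jobs
can page; a preferred policy just falls back to remote nodes. A rough libnuma
sketch of the difference (not code from the thread; link with -lnuma):

#include <numa.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define SZ (16 << 20)

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "no NUMA support on this system\n");
        return 1;
    }

    /* PREFERRED (MPOL_PREFERRED): try node 0 first, but silently fall back
       to other nodes if node 0 has no free pages. */
    numa_set_preferred(0);
    char *soft = malloc(SZ);
    memset(soft, 0, SZ);                  /* pages are placed at first touch */

    /* BIND (MPOL_BIND): restrict new pages to node 0 only; if node 0 fills
       up, the process reclaims/pages instead of spilling to another node -
       the risk for memory-imbalanced jobs. */
    struct bitmask *nodes = numa_allocate_nodemask();
    numa_bitmask_setbit(nodes, 0);
    numa_set_membind(nodes);
    char *hard = malloc(SZ);
    memset(hard, 0, SZ);

    numa_free_nodemask(nodes);
    free(soft);
    free(hard);
    return 0;
}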
Hi Chris,
Try setting OMPI_MCA_orte_tmpdir_base.
Going back to a related earlier OMPI users thread ("How to set up state-less node /tmp for OpenMPI usage"), here are sm pingpong latencies (using 1.4.3) for
session dirs on Lustre, an SSD and tmpfs:
[dbs900@v1490 ~/MPI]$ export OMPI_MCA_orte_t
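(For anyone reading this in the archive: the parameter can be set either in the
environment, e.g. export OMPI_MCA_orte_tmpdir_base=/dev/shm, or on the command
line as mpirun --mca orte_tmpdir_base /dev/shm ...; /dev/shm is only an example
tmpfs path, not necessarily what was used in the runs above.)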
Minor issue.
We have a user whose executable takes a --debug argument. The argument
is removed when they run: mpirun --debug ... ./a.out --debug ...
This is still a problem in 1.5.5. I haven't checked if this has changed
in later code.
David
There appears to have been a change in the behaviour of -npersocket from
1.4.3 to 1.6.x (tested with 1.6.2). Below is what I see on a pair of dual-socket
quad-core Nehalem nodes running under PBS. Is this expected?
Thanks
David
[dbs900@v482 ~/MPI]$ mpirun -V
mpirun (Open MPI) 1.4.3
...
[dbs90
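The archived output is cut off above; a version-independent way to compare what
-npersocket actually did in 1.4.3 versus 1.6.x is to have each rank print the
CPU affinity mask it ended up with. A small sketch using sched_getaffinity()
(not the program from the original mail):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, len, cpu;
    char host[MPI_MAX_PROCESSOR_NAME];
    cpu_set_t mask;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Get_processor_name(host, &len);

    sched_getaffinity(0, sizeof(mask), &mask);

    printf("rank %d on %s is allowed on cpus:", rank, host);
    for (cpu = 0; cpu < CPU_SETSIZE; cpu++)
        if (CPU_ISSET(cpu, &mask))
            printf(" %d", cpu);
    printf("\n");

    MPI_Finalize();
    return 0;
}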
Kiril Dichev has already pointed out a problem with MPI_Cart_create
http://www.open-mpi.org/community/lists/devel/2009/08/6627.php
MPI_Graph_create has the same problem. I checked all the other
functions with logical IN arguments and none of them does anything
similar.
David
Chris Samuel wrote:
- "Ashley Pittman" wrote:
$ grep Cpus_allowed_list /proc/$$/status
Useful, ta!
Does this imply the default is to report on processes
in the current cpuset rather than the entire system?
Does anyone else feel that violates the principle of
least surprise?
Not reall
der, MPI_Comm *comm_cart) {
+int MPI_Graph_create(MPI_Comm old_comm, int nnodes, int *index,
+ int *edges, int reorder, MPI_Comm *comm_graph)
+{
...
+if ((0 > reorder) || (1 < reorder)) {
David
David Singleton wrote:
Kiril Dichev has already pointed out a pro
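For context when reading the diff above: 'reorder' is a logical argument, so
the MPI standard only distinguishes zero from nonzero, and a parameter check
that accepts only 0 or 1 is too strict. A small reproducer (a sketch, not taken
from the thread) that a conforming implementation should run cleanly:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    /* 'reorder' is a logical: any nonzero value (for example the -1 that some
       Fortran compilers use for .TRUE.) must be treated as true, not rejected. */
    int dims[1] = {1}, periods[1] = {0};
    int index[1] = {0}, edges[1] = {0};
    int rank;
    MPI_Comm cart, graph;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Cart_create(MPI_COMM_WORLD, 1, dims, periods, -1, &cart);
    MPI_Graph_create(MPI_COMM_WORLD, 1, index, edges, -1, &graph);

    if (cart != MPI_COMM_NULL)
        MPI_Comm_free(&cart);
    if (graph != MPI_COMM_NULL)
        MPI_Comm_free(&graph);

    if (rank == 0)
        printf("nonzero 'reorder' values were accepted\n");

    MPI_Finalize();
    return 0;
}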
Our site effectively runs all slurm jobs with sbatch --export=NONE ... and
creates the necessary environment inside the batch script. After upgrading
to Slurm 14.11, Open MPI mpirun jobs hit
2015-04-15T08:53:54+08:00 nod0138 slurmstepd[3122]: error: execve(): orted:
No such file or directory
The issue
On Sat, Apr 18, 2015 at 6:27 AM, Paul Hargrove wrote:
>
> The problem here appears to be that the new (--export=NONE) behavior means
> that $PATH and/or $LD_LIBRARY_PATH are not propagated, and thus orted could
> not be found.
> I believe you can configure Open MPI with
> --enable-mpirun-prefix-b
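(Archive note: besides rebuilding with the configure option Paul mentions, a
per-run workaround in these versions is to invoke mpirun by its absolute path
or to pass --prefix <installdir>, which makes mpirun set PATH and
LD_LIBRARY_PATH for the orted on the remote nodes itself; <installdir> stands
for the actual Open MPI installation prefix.)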