Yep - this has been fixed in the upcoming 1.4.1 (but there's a problem with the rsh
launcher in the 1.4 nightly tarballs right now).
A fix has also been submitted upstream to libtool.
Thanks for noticing!
-jms
Sent from my PDA. No type good.
- Original Message -
From: users-boun...@open-mpi.o
It's complicated, but the short answer is that a "short" message is defined as
one where the cost of a memory copy doesn't matter.
You could always use MPI_Alloc_mem to get registered memory. But I don't recall
offhand whether we check to see if the memory is already registered for short
messages (
I filed a trac ticket:
https://svn.open-mpi.org/trac/ompi/ticket/2153
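For reference, a minimal sketch of the MPI_Alloc_mem / MPI_Free_mem pattern mentioned
above; the 4096-byte size, the destination rank, and the helper name
send_with_alloc_mem are just illustrative:

#include <mpi.h>
#include <string.h>

/* Ask MPI for the send buffer so the library can hand back registered
   ("pinned") memory where the interconnect supports it. */
void send_with_alloc_mem(int dest)
{
    char *buf;

    MPI_Alloc_mem(4096, MPI_INFO_NULL, &buf);

    strcpy(buf, "hello");
    MPI_Send(buf, 6, MPI_CHAR, dest, 0, MPI_COMM_WORLD);

    MPI_Free_mem(buf);
}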
Götz Waschk wrote:
Hi everyone,
I'm seeing a very strange effect with the openib btl. It seems to slow
down my application even if not used at all. For my test, I am running
a simple application with 8 processes on a single node.
Also, you can use "-mca btl ^sm", which, at least for me, actually gives better
performance than increasing the number of fifos does.
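For example (the process count and executable name here are just placeholders):

mpirun -np 8 --mca btl ^sm ./my_app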
Matt
On Jan 3, 2010, at 10:04 PM, Louis Rossi wrote:
> I am having a problem with BCast hanging on a dual quad core Opteron (2382,
> 2.6GHz, Quad Core, 4 x 512KB L2, 6MB L3 Cache)
First, a small comment: use the MPI standard timer MPI_Wtime() instead
of gettimeofday.
I think the problem is that MPI_Sendrecv_replace needs a temporary
buffer. Unless the message is very small, the function uses
MPI_Alloc_mem and MPI_Free_mem to allocate and free this temporary
buffer. W
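A minimal sketch of timing MPI_Sendrecv_replace with MPI_Wtime; the buffer size,
peer rank, and tag below are arbitrary placeholders:

#include <stdio.h>
#include <mpi.h>

void time_sendrecv_replace(int rank, int peer)
{
    double buf[1024] = {0.0};   /* contents don't matter for the timing */
    MPI_Status status;
    double t0, t1;

    t0 = MPI_Wtime();
    MPI_Sendrecv_replace(buf, 1024, MPI_DOUBLE,
                         peer, 0,   /* send to peer, tag 0 */
                         peer, 0,   /* receive from peer, tag 0 */
                         MPI_COMM_WORLD, &status);
    t1 = MPI_Wtime();

    printf("rank %d: MPI_Sendrecv_replace took %g s\n", rank, t1 - t0);
}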
Hi, guys.
As I understand it, to send short MPI messages, Open MPI copies the
message into a preallocated buffer and then uses RDMA.
I was wondering if we can avoid the overhead of memory copy. If the
user buffers for short messages are reused a lot, we can just register
the user buffer instead of using
Hi!
config/libtool.m4 has a bug when pgi 10 is used.
The lines:
pgCC* | pgcpp*)
# Portland Group C++ compiler
case `$CC -V` in
*pgCC\ [[1-5]]* | *pgcpp\ [[1-5]]*)
also match pgi 10.0 (the leading '1' of '10.0' satisfies [[1-5]]), but 10.0 doesn't have the --instantiation_dir flag.
--
Ake Sandgren,
On 01/04/2010 01:23 AM, Eugene Loh wrote:
1) What
about "-mca coll_sync_barrier_before 100"? (The default may be
1000. So, you can try various values less than 1000. I'm suggesting
100.) Note that broadcast has somewhat one-way traffic flow, which can
have some undesirable flow control issues.
Lenny Verkhovsky wrote:
Have you tried the IMB benchmark with Bcast?
I think the problem is in the app.
Presumably not, since increasing btl_sm_num_fifos cures the problem.
This appears to be trac 2043 (again)! Note that all processes *do*
enter the broadcasts. The first broadcast
Have you tried the IMB benchmark with Bcast? I think the problem is in
the app. All ranks in the communicator should enter Bcast; since you
have an if (rank==0) / else branch, not all of them enter the same flow:
if (iRank == 0)
{
iLength = sizeof (acMessage);
MPI_Bcast (&iLength, 1, MPI_INT, 0, MPI_
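A hedged sketch of the usual fix, reusing iRank and acMessage from the snippet above
(the helper name broadcast_message and the buffer pcBuffer are just illustrative):
rank 0 supplies the data, but every rank makes the matching MPI_Bcast calls.

#include <stdlib.h>
#include <string.h>
#include <mpi.h>

void broadcast_message(int iRank, const char *acMessage)
{
    int iLength = 0;
    char *pcBuffer;

    if (iRank == 0)
        iLength = strlen(acMessage) + 1;   /* only rank 0 knows the length */

    /* Collective calls: executed by every rank, not just rank 0. */
    MPI_Bcast(&iLength, 1, MPI_INT, 0, MPI_COMM_WORLD);

    pcBuffer = malloc(iLength);
    if (iRank == 0)
        memcpy(pcBuffer, acMessage, iLength);

    MPI_Bcast(pcBuffer, iLength, MPI_CHAR, 0, MPI_COMM_WORLD);

    /* ... every rank can now use pcBuffer ... */
    free(pcBuffer);
}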
Dear Roman, dear all,
this error indicates that something is very wrong with the trace collection. Do
you know if this error happens in the middle of the run or at the end? To find
out, please export the following before the run:
"export VT_UNIFY=no"
and afterwards run the unification manually.
Hi everyone,
I'm seeing a very strange effect with the openib btl. It seems to slow
down my application even if not used at all. For my test, I am running
a simple application with 8 processes on a single node, so openib
should not be used at all. Still, the result with the btl enabled is
much worse.
If you're willing to try some stuff:
1) What about "-mca coll_sync_barrier_before 100"? (The default may be
1000. So, you can try various values less than 1000. I'm suggesting
100.) Note that broadcast has somewhat one-way traffic flow, which can
have some undesirable flow control issues.
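For example, on the mpirun command line (process count and executable name are
placeholders):

mpirun -np 8 --mca coll_sync_barrier_before 100 ./my_app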
I am having a problem with BCast hanging on a dual quad core Opteron
(2382, 2.6GHz, Quad Core, 4 x 512KB L2, 6MB L3 Cache) system running
FC11 with openmpi-1.4. The LD_LIBRARY_PATH and PATH variables are
correctly set. I have used the FC11 rpm distribution of openmpi and
built openmpi-1.4 loc
Hello,
I followed the instructions on the FAQ page to configure and compile openmpi
so that it should work with Torque.
./configure --with-tm=/usr/local --prefix=/usr/local
The option --disable-server was used to configure torque on the compute
nodes.
I got openmpi compiled without any error messa