Thank you, Dick, for your detailed reply.
I am sorry, could you explain further what you meant by "unless you are
calling MPI_Comm_spawn on a single task communicator you would need to
have a different input communicator for each thread that will make an
MPI_Comm_spawn call"? I am confused by th…
Dear Sir,
I am sending the details as follows:
1. I am using openmpi-1.3.3 and blcr 0.8.2.
2. I have installed blcr 0.8.2 first under /root/MS.
3. Then I installed openmpi 1.3.3 under /root/MS.
4. I have configured and installed Open MPI as follows:
#./configure --with-ft=cr --enable-mpi-threads --wi…
MPI_COMM_SELF is one example. The only task it contains is the local task.
The other case I had in mind is where there is a master doing all the spawns.
The master is launched as an MPI "job", but it has only one task. In that
master, even MPI_COMM_WORLD is what I called a "single task communicator".
Be…
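To make the single-task-communicator idea concrete, here is a minimal sketch (my illustration, not code from the thread; the child binary name "./worker" is a placeholder). Each thread passes MPI_COMM_SELF, which contains only the calling task, so each spawn is a collective over exactly one process and the threads do not need to coordinate on a shared communicator:

/* Minimal sketch, not from the original thread: spawning through
 * MPI_COMM_SELF, a single-task communicator. */
#include <mpi.h>

int main(int argc, char **argv)
{
    int provided;
    MPI_Comm child;

    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    /* MPI_COMM_SELF contains only the local task, so this collective
     * call involves exactly one process. "./worker" is hypothetical. */
    MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 1, MPI_INFO_NULL,
                   0, MPI_COMM_SELF, &child, MPI_ERRCODES_IGNORE);

    MPI_Finalize();
    return 0;
}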
I'm having a problem running Open MPI under Torque. It complains as if there
were a command syntax problem, but the three variations below are all correct,
as best I can tell from mpirun -help. The environment in which the command
executes, i.e. PATH and LD_LIBRARY_PATH, is correct. Torque is 2.3.x.
Hi,
I'm using r21970 of the trunk on Linux 2.6.18-3-amd64 with gcc version
4.2.3 (Debian 4.2.3-2).
When I compile Open MPI with the default options, it works.
But if I use the --with-platform=optimized option, then I get a segfault for
every program I run.
==3073== Access not within mapped region…
One of my users recently reported random hangs of his Open MPI application.
I've run some tests using multiple 2-node, 16-core runs of the IMB
benchmark and can occasionally replicate the problem. Looking through
the mail archive, a previous occurrence of this error seems to have been
suspect code, but as I…
Hello,
I just got some new cluster hardware :) :(
I can't seem to overcome an openib problem.
I get this at run time:
error polling HP CQ with -2 errno says Success
I've tried 2 different IB switches and multiple sets of nodes, all on one
switch or the other, to try to eliminate the ha…
On Sep 25, 2009, at 7:10 AM, Mallikarjuna Shastry wrote:
Dear Sir,
I am sending the details as follows:
1. I am using openmpi-1.3.3 and blcr 0.8.2.
2. I have installed blcr 0.8.2 first under /root/MS.
3. Then I installed openmpi 1.3.3 under /root/MS.
4. I have configured and installed Open MPI as…
It looks like the buffering operations consume about 15% as much time as the
allreduce operations. Not huge, but not trivial, all the same. Is there
any way to avoid the buffering step?
On Thu, Sep 24, 2009 at 6:03 PM, Eugene Loh wrote:
> Greg Fischer wrote:
>
> (I apologize in advance for…
On Fri, Sep 25, 2009 at 10:12:33PM -0400, Greg Fischer wrote:
>
> It looks like the buffering operations consume about 15% as much time
> as the allreduce operations. Not huge, but not trivial, all the same.
> Is there any way to avoid the buffering step?
That depends on how you allocate…
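The rest of that reply is cut off above, so as an illustration of one standard way to avoid the separate send buffer (my sketch, not necessarily what the reply went on to suggest): MPI_Allreduce accepts MPI_IN_PLACE as the send buffer, in which case the reduction reads from and overwrites the receive buffer directly, with no staging copy.

/* Minimal sketch, assuming the local values already live in one
 * contiguous array; my illustration, not code from the thread. */
#include <mpi.h>

int main(int argc, char **argv)
{
    double data[1024]; /* local contribution, filled in by the real code */

    MPI_Init(&argc, &argv);

    /* MPI_IN_PLACE as the send buffer: the reduced result overwrites
     * "data" on every rank, with no separate send buffer or copy. */
    MPI_Allreduce(MPI_IN_PLACE, data, 1024, MPI_DOUBLE, MPI_SUM,
                  MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}

Whether this helps does depend on the allocation: MPI_IN_PLACE only removes the copy when the values being reduced are already contiguous in the buffer the result should land in.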