Hmmm... well, nothing definitive there, I'm afraid.
All I can suggest is to remove or reduce the threading. Like I said, we aren't
terribly thread-safe at this time. I suspect you're stepping into one of those
non-safe areas here.
Hopefully we'll do better in later releases.
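A minimal sketch (not from this thread) of how to check how much thread
support the MPI library actually grants at startup; if MPI_Init_thread
returns less than MPI_THREAD_MULTIPLE, calling MPI concurrently from
several threads is not safe:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;
        /* Request full multi-threaded support; the library may grant less. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        if (provided < MPI_THREAD_MULTIPLE)
            fprintf(stderr, "only thread level %d provided\n", provided);
        MPI_Finalize();
        return 0;
    }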
On Sep 6, 2011, at 12:49 PM, Simone Pellegrini wrote:
> On 09/06/2011 02:57 PM, Ralph Castain wrote:
>> Hi Simone
>>
>> Just to clarify: is your application threaded? Could you please send the
>> OMPI configure cmd you used?
>
> yes, it is threaded. There are basically 3 threads, 1 for the outgoing
> messages (MPI_Send), 1 for incoming messages (MPI_Iprobe / MPI_R
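A hedged sketch of the receiver-thread pattern described above, assuming
the call cut off after "MPI_Iprobe / MPI_R" is a matching MPI_Recv (the
buffer handling here is illustrative):

    #include <mpi.h>
    #include <stdlib.h>

    /* Runs in the dedicated incoming-message thread; requires that the
       MPI library granted MPI_THREAD_MULTIPLE. */
    void receiver_loop(volatile int *running)
    {
        while (*running) {
            int flag, count;
            MPI_Status st;
            /* Non-blocking check for any pending message. */
            MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &flag, &st);
            if (!flag)
                continue;
            MPI_Get_count(&st, MPI_BYTE, &count);
            char *buf = malloc(count);
            MPI_Recv(buf, count, MPI_BYTE, st.MPI_SOURCE, st.MPI_TAG,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            /* ... hand buf off to the application here ... */
            free(buf);
        }
    }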
Adding the debug flags just changes the race condition. Interestingly, those
values only impact the behavior of mpirun, so it looks like the race condition
is occurring there.
I think we'll have problems on all machines with Magny-Cours *and*
cpusets/cgroups restricting the number of available processors. Not sure
how common this is.
I just checked the hwloc v1.2 branch changelog. Nothing in it really matters
for OMPI except the patch I sent below (commit v1.2@3767).
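An illustrative check (not from the thread) for whether a cpuset/cgroup is
restricting the processors hwloc reports, assuming the hwloc 1.x bitmap
API: compare the machine's complete cpuset with the allowed one.

    #include <hwloc.h>
    #include <stdio.h>

    int main(void)
    {
        hwloc_topology_t topo;
        hwloc_topology_init(&topo);
        hwloc_topology_load(topo);
        /* All PUs in the machine vs. the PUs this process may actually
           use; the latter shrinks under a cpuset/cgroup restriction. */
        int complete = hwloc_bitmap_weight(hwloc_topology_get_complete_cpuset(topo));
        int allowed  = hwloc_bitmap_weight(hwloc_topology_get_allowed_cpuset(topo));
        printf("complete PUs: %d, allowed PUs: %d\n", complete, allowed);
        hwloc_topology_destroy(topo);
        return 0;
    }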
Brice --
Should I apply that patch to the OMPI 1.5 series, or should we do a hwloc 1.2.2
release? I.e., is this broken on all AMD/Magny-Cours machines?
Should I also do an emergency OMPI 1.5.x release with (essentially) just this
fix? (OMPI 1.5.x currently contains hwloc 1.2.0)
Are you overwhelming the receiver with short, unexpected messages such that MPI
keeps mallocing and mallocing and mallocing in an attempt to eagerly receive
all the messages? I ask because Open MPI only eagerly sends short messages --
long messages are queued up at the sender and not actually transferred until
the receiver posts a matching receive.
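A hypothetical sketch of the failure mode being described: a tight loop of
small sends can outrun the receiver, forcing its MPI library to buffer
every unexpected eager message. One common mitigation is to throttle the
sender with an occasional synchronous send (the threshold below is made up):

    #include <mpi.h>

    /* Sender side: stream of small messages, periodically throttled so the
       receiver cannot accumulate unbounded unexpected-message buffers. */
    void send_stream(const char *payload, int len, int dest, long nmsgs)
    {
        for (long i = 0; i < nmsgs; i++) {
            if (i % 1000 == 999)
                /* MPI_Ssend completes only once the receiver has matched
                   it, so it acts as a periodic brake on the sender. */
                MPI_Ssend((void *)payload, len, MPI_CHAR, dest, 0, MPI_COMM_WORLD);
            else
                MPI_Send((void *)payload, len, MPI_CHAR, dest, 0, MPI_COMM_WORLD);
        }
    }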
Hi Fabien,
I've done some tests these days.
g95 works with Open MPI 1.4.3; you need to start cmake-gui from a
Visual Studio command prompt in order to get all the correct environment
settings, then after the first-time configuration, set
CMAKE_Fortran_COMPILER to the path of g95.exe, and e
Dear all,
I am developing an MPI application which makes heavy use of MPI_Spawn. Usually
everything works fine for the first hundred spawns, but after a while the
application exits with a curious message:
[arch-top:27712] [[36904,165],0] ORTE_ERROR_LOG: Data unpack would read
past end of buffer in fi
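A hypothetical minimal reproducer for the pattern described (the child
program name and iteration count are made up):

    #include <mpi.h>

    /* Parent: spawn a child job repeatedly, disconnecting after each. */
    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        for (int i = 0; i < 200; i++) {
            MPI_Comm child;
            MPI_Comm_spawn("./child", MPI_ARGV_NULL, 1, MPI_INFO_NULL,
                           0, MPI_COMM_SELF, &child, MPI_ERRCODES_IGNORE);
            MPI_Comm_disconnect(&child);
        }
        MPI_Finalize();
        return 0;
    }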
On 05/09/2011 21:29, Brice Goglin wrote:
> Dear Ake,
> Could you try the attached patch? It's not optimized, but it's probably
> going in the right direction.
> (and don't forget to remove the above comment-out if you tried it).
Actually, now that I've seen your entire topology, I found out tha
Hi,
Thanks for the information. My understanding is that Open MPI is a library,
not a tool like qperf -- correct me if I am wrong. If it is a tool, please
explain how to run it.
On Fri, Sep 2, 2011 at 5:13 PM, Jeff Squyres wrote:
> If I understand you correctly, it sounds like MPI -- overall -- is new to
>
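For what it's worth, a minimal illustration (not from the thread) of Open
MPI as a library: you compile your program with the mpicc wrapper and
launch it with mpirun.

    /* hello.c -- build and run with:
         mpicc hello.c -o hello
         mpirun -np 4 ./hello        */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        printf("hello from rank %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }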