On Tuesday, November 23, 2010 at 20:21 -0500, Jeff Squyres (jsquyres) wrote:
> Beware that MPI_Request_free on active buffers is valid but evil. You CANNOT
> be sure when the buffer is available for reuse.
Yes, but as I said, in my program an MPI rank never floods other MPI
ranks.
(I like to thin
Thank you!
Your support is outstanding!
On Tuesday, November 23, 2010 at 22:25 -0500, Eugene Loh wrote:
> Jeff Squyres (jsquyres) wrote:
>
> >Ya, it sounds like we should fix this eager limit help text so that others
> >aren't misled. We did say "attempt", but that's probably a bit too subtle.
Jeff Squyres (jsquyres) wrote:
Ya, it sounds like we should fix this eager limit help text so that others aren't misled. We did say "attempt", but that's probably a bit too subtle.
Eugene - iirc: this is in the btl base (or some other central location) because it's shared between all btls.
Beware that MPI_Request_free on active buffers is valid but evil. You CANNOT be
sure when the buffer is available for reuse.
There was a sentence or paragraph added to MPI 2.2 describing exactly this
case.
Sent from my PDA. No type good.
On Nov 23, 2010, at 5:36 PM, Sébastien Boisvert
wrote:
Ya, it sounds like we should fix this eager limit help text so that others
aren't misled. We did say "attempt", but that's probably a bit too subtle.
Eugene - iirc: this is in the btl base (or some other central location) because
it's shared between all btls.
Sent from my PDA. No type good.
Whoa! Thanks, I will try that.
On Tuesday, November 23, 2010 at 18:03 -0500, George Bosilca wrote:
> If you know the max size of the receives I would take a different approach.
"max size" is the maximum buffer size required, right ?
in my case, it is 4096.
> Post a few persistent receives, and manage them in a circular buffer.
If you know the max size of the receives I would take a different approach.
Post a few persistent receives, and manage them in a circular buffer. Instead of
doing an MPI_Iprobe, use MPI_Test on the current head of your circular buffer.
Once you use the data related to the receive, just do an MPI_S
George Bosilca wrote:
Moreover, eager send can improve performance if and only if the matching
receives are already posted on the peer. If not, the data will become
unexpected, and there will be one additional memcpy.
I don't think the first sentence is strictly true. There is a cost
associ
Sébastien Boisvert wrote:
On Tuesday, November 23, 2010 at 16:07 -0500, Eugene Loh wrote:
Sébastien Boisvert wrote:
Case 1: 30 MPI ranks, message size is 4096 bytes
File: mpirun-np-30-Program-4096.txt
Outcome: It hangs -- I killed the poor thing after 30 seconds or so.
On Tuesday, November 23, 2010 at 17:38 -0500, George Bosilca wrote:
> The eager size reported by ompi_info includes the Open MPI internal headers.
> They are anywhere between 20 and 64 bytes long (potentially more for some
> particular networks), so what Eugene suggested was a safe boundary.
I see
The eager size reported by ompi_info includes the Open MPI internal headers.
They are anywhere between 20 and 64 bytes long (potentially more for some
particular networks), so what Eugene suggested was a safe boundary.
Moreover, eager send can improve performance if and only if the matching
receives are already posted on the peer.
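Given that the eager size George quotes includes the internal headers, a 4096-byte payload plus 20 to 64 header bytes must fit under the eager limit for these messages to go out eagerly. A hedged sketch of how one might inspect and raise the relevant Open MPI 1.x MCA parameters (the program name is hypothetical; parameter names and defaults vary by version and network):

```shell
# Inspect the current shared-memory eager limit (includes header bytes):
ompi_info --param btl sm | grep eager_limit

# Raise the sm and TCP eager limits so a 4096-byte payload plus the
# 20-64 byte internal headers still fits under the threshold:
mpirun --mca btl_sm_eager_limit 8192 \
       --mca btl_tcp_eager_limit 8192 \
       -np 30 ./Program
```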
On Tuesday, November 23, 2010 at 17:28 -0500, George Bosilca wrote:
> Sebastien,
>
> Using MPI_Isend doesn't guarantee asynchronous progress. As you might be
> aware, the non-blocking communications are guaranteed to progress only when
> the application is in the MPI library. Currently very few MPI implementations
> progress asynchronously.
On Tuesday, November 23, 2010 at 16:07 -0500, Eugene Loh wrote:
> Sébastien Boisvert wrote:
>
> >Now I can describe the cases.
> >
> >
> The test cases can all be explained by the test requiring eager messages
> (something that test4096.cpp does not require).
>
> >Case 1: 30 MPI ranks, message size is 4096 bytes
Sebastien,
Using MPI_Isend doesn't guarantee asynchronous progress. As you might be aware,
the non-blocking communications are guaranteed to progress only when the
application is in the MPI library. Currently very few MPI implementations
progress asynchronously (and unfortunately Open MPI is no
No message is eager if there is congestion. 64K is eager for TCP only if the
kernel buffer has enough room to hold the 64k. For SM it only works if there
are ready buffers. In fact, eager is an optimization of the MPI library, not
something the users should be aware of, or base their application
On Tuesday, November 23, 2010 at 15:17 -0500, Jeff Squyres (jsquyres) wrote:
> Sorry for the delay in replying - many of us were at SC last week.
Nothing to be sorry for!
>
> Admittedly, I'm looking at your code on a PDA, so I might be missing some
> things. But I have 2 q's:
You got it all right.
Sébastien Boisvert wrote:
Now I can describe the cases.
The test cases can all be explained by the test requiring eager messages
(something that test4096.cpp does not require).
Case 1: 30 MPI ranks, message size is 4096 bytes
File: mpirun-np-30-Program-4096.txt
Outcome: It hangs -- I killed the poor thing after 30 seconds or so.
To add to Jeff's comments:
Sébastien Boisvert wrote:
The reason is that I am developing an MPI-based software, and I use
Open-MPI as it is the only implementation I am aware of that sends
messages eagerly (a powerful feature, that is).
As wonderful as OMPI is, I am fairly sure other MPI implementations
Hi
I am running into a very weird problem.
If I initialize MPI normally, i.e., with MPI_Init(), and make one of the MPI
processes do a "popen()" call, I get the following warning/error message:
== Message start ===
An MPI process has executed an operation involving a call to the
"fork()" system call
Sorry for the delay in replying - many of us were at SC last week.
Admittedly, I'm looking at your code on a PDA, so I might be missing some
things. But I have 2 q's:
1. Your send routine doesn't seem to protect against sending to yourself. Correct?
2. You're not using nonblocking sends, which, if