Thanks, that at least explains what is going on. Because I have an
unbalanced workload (at least for now), I assume that I'll need to
poll. If I replace the compositor loop with the following, it appears
that I prevent the serialization/starvation and service the servers
equally. I can think of edge [...]
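The replacement loop itself is cut off in the digest. A minimal
sketch of a polling loop along the lines described, assuming rank 0
is the compositor, the servers are ranks 1..num_servers, and that
buffer, BUFLEN, and the expected message count come from the
surrounding program (all names here are illustrative, not the code
from the thread):

/* Hypothetical sketch; the actual replacement loop was truncated.
 * Round-robin MPI_Iprobe over the servers so that one sender's
 * backlog of eager sends cannot starve the others. */
int remaining = num_servers * LOOPS * BUFFERS; /* expected total (assumed) */
int src = 1;                                   /* first server rank (assumed) */
while (remaining > 0) {
    int flag = 0;
    MPI_Status status;
    MPI_Iprobe(src, MPI_ANY_TAG, MPI_COMM_WORLD, &flag, &status);
    if (flag) {
        MPI_Recv(buffer, BUFLEN, MPI_CHAR, src, status.MPI_TAG,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        remaining--;
        /* ... composite the fragment from 'src' here ... */
    }
    src = (src % num_servers) + 1;  /* advance to the next server */
}

Because each pass probes a different source, a server with many
queued messages gets exactly one receive per round instead of being
drained to completion before anyone else is looked at.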
George Bosilca wrote:
Mark,
MPI does not impose any global order on the messages. The only
requirement is that between two peers on the same communicator the
messages (or at least the part required for the matching) are
delivered in order. This makes both execution traces you sent with
your original email (share [...]
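A standalone illustration (not from the thread) of the guarantee
George is describing: sends from one peer on one communicator match
in the order they were posted, but sends from different peers may
interleave in any way. Run with at least three ranks:

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 1 || rank == 2) {
        /* Each sender posts two messages in a known order. */
        int first = rank * 10, second = rank * 10 + 1;
        MPI_Send(&first,  1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        MPI_Send(&second, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    } else if (rank == 0) {
        int i, v;
        MPI_Status st;
        for (i = 0; i < 4; i++) {
            MPI_Recv(&v, 1, MPI_INT, MPI_ANY_SOURCE, 0,
                     MPI_COMM_WORLD, &st);
            /* Guaranteed: 10 arrives before 11, and 20 before 21.
             * Not guaranteed: any particular interleaving between
             * the senders; 10,20,11,21 is as legal as 10,11,20,21. */
            printf("got %d from rank %d\n", v, st.MPI_SOURCE);
        }
    }

    MPI_Finalize();
    return 0;
}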
Thanks, but that won't help. In the real application the messages are at
least 25,000 bytes long, mostly much larger.
Thanks,
Mark
On Fri, Jun 19, 2009 at 1:17 PM, Eugene Loh wrote:
> Mark Bolstad wrote:
>
>> [...]
Mark Bolstad wrote:
I have a small test code with which I've managed to duplicate the
results from a larger code. In essence, using the sm btl with ISend,
I wind up with the communication being completely serialized, i.e.,
all the calls from process 1 complete, then all from 2, ...
I need to do so [...]
Not that long, 150 lines.
Here it is:
#include <stdio.h>   /* header names were eaten by the archive; */
#include <stdlib.h>  /* these six are a best-guess restoration  */
#include <string.h>
#include <unistd.h>
#include <math.h>
#include "mpi.h"
#define BUFLEN 25000
#define LOOPS 10
#define BUFFERS 4
#define GROUP_SIZE 4
int main(int argc, char *argv[])
{
    int myid, numprocs, next, namelen;
    int color, key, newid;
    char buffer[BUFLEN];
    [...]
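The remaining lines of the program are cut off in the digest. Based
on the description in the thread, a sketch of how the truncated body
might continue (picking up after the declarations above; the real
code also declares color, key, and newid, which suggests an
MPI_Comm_split that this sketch omits):

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

    if (myid == 0) {
        /* Compositor: drain every message and log the source order;
         * per the thread, a plain MPI_ANY_SOURCE receive here shows
         * the serialization (all of rank 1, then all of rank 2, ...). */
        int i;
        MPI_Status status;
        for (i = 0; i < (numprocs - 1) * LOOPS * BUFFERS; i++) {
            MPI_Recv(buffer, BUFLEN, MPI_CHAR, MPI_ANY_SOURCE,
                     MPI_ANY_TAG, MPI_COMM_WORLD, &status);
            printf("message %d from rank %d\n", i, status.MPI_SOURCE);
        }
    } else {
        /* Server: keep BUFFERS nonblocking sends in flight for
         * LOOPS rounds. */
        static char bufs[BUFFERS][BUFLEN]; /* static: ~100 KB, off the stack */
        MPI_Request reqs[BUFFERS];
        int loop, b;
        for (loop = 0; loop < LOOPS; loop++) {
            for (b = 0; b < BUFFERS; b++)
                MPI_Isend(bufs[b], BUFLEN, MPI_CHAR, 0, b,
                          MPI_COMM_WORLD, &reqs[b]);
            MPI_Waitall(BUFFERS, reqs, MPI_STATUSES_IGNORE);
        }
    }

    MPI_Finalize();
    return 0;
}

Launched with something like "mpirun --mca btl sm,self -np 5 ./test"
(an assumption; the thread does not show the launch line), the
printed source order would show whether receives interleave across
servers or complete one rank at a time.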
Mark Bolstad wrote:
I'll post the test code if requested (this email is already long)
Yipes, how long is the test code? Short enough to send, yes? Please send.
I have a small test code [...] This is version 1.3.2, vanilla
compile. I [...]