Thanks for your reply.
The message size is 72 bytes.
The master sends the message out to each of the 51 worker nodes.
Then, after doing its local work, each worker sends a same-size message
back to the master.
The master uses vector.push_back(new messageType) to allocate storage for
each incoming worker message, and MPI_Irecv(workerNodeID, messageTag,
bufferVector[row][column]) to receive it.
The row is the rank of the worker and the column is the index of the
message from that worker; each worker may send multiple messages to the
master.
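In outline, the master side looks like this (a simplified sketch, not my
actual code; messageType, masterReceive, numWorkers, and msgsPerWorker are
placeholder names):

    #include <mpi.h>
    #include <vector>

    struct messageType { char data[72]; };   // the 72-byte message

    void masterReceive(int numWorkers, int msgsPerWorker)
    {
        // one buffer slot per (worker rank, message index)
        std::vector<std::vector<messageType*> > bufferVector(numWorkers + 1);
        std::vector<MPI_Request> requests;

        for (int rank = 1; rank <= numWorkers; ++rank) {
            for (int col = 0; col < msgsPerWorker; ++col) {
                bufferVector[rank].push_back(new messageType);
                MPI_Request req;
                // the posted buffer must be at least as large as the
                // message the worker actually sends
                MPI_Irecv(bufferVector[rank][col], (int)sizeof(messageType),
                          MPI_BYTE, rank, 0 /* messageTag */, MPI_COMM_WORLD,
                          &req);
                requests.push_back(req);
            }
        }
        MPI_Waitall((int)requests.size(), &requests[0], MPI_STATUSES_IGNORE);
    }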
When the number of worker nodes is large, I get an MPI_ERR_TRUNCATE error.
Any help is appreciated. 
JACK
July 10, 2010

Date: Sat, 10 Jul 2010 23:12:49 -0700
From: eugene....@oracle.com
To: us...@open-mpi.org
Subject: Re: [OMPI users] OpenMPI how large its buffer size ?

Jack Bryan wrote:

  
  The master node can receive messages (all the same size) from 50
  worker nodes. But it cannot receive messages from 51 nodes. It caused a
  "truncate error".

How big was the buffer that the program specified in the receive call? 
How big was the message that was sent?



MPI_ERR_TRUNCATE means that you posted a receive with an application
buffer that turned out to be too small to hold the message that was
received.  It's a user application error that has nothing to do with
MPI's internal buffers.  MPI's internal buffers don't need to be big
enough to hold that message.  MPI could require the sender and receiver
to coordinate so that only part of the message is moved at a time.
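If the receiver does not know the incoming message size in advance, one way
around this (a sketch, not the only possible fix; receiveAnySize is a
made-up helper name) is to probe the message first and size the buffer from
the status before posting the receive:

    #include <mpi.h>
    #include <vector>

    // Probe the pending message, allocate a buffer of exactly the right
    // size, then receive it, so MPI_ERR_TRUNCATE cannot occur.
    std::vector<char> receiveAnySize(int source, int tag, MPI_Comm comm)
    {
        MPI_Status status;
        MPI_Probe(source, tag, comm, &status);     // block until a message arrives

        int count = 0;
        MPI_Get_count(&status, MPI_BYTE, &count);  // actual size in bytes

        std::vector<char> buf(count);
        MPI_Recv(count ? &buf[0] : NULL, count, MPI_BYTE,
                 status.MPI_SOURCE, status.MPI_TAG, comm, MPI_STATUS_IGNORE);
        return buf;
    }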



  I used the same buffer to get the message in the 50-node case.

  About the "rendezvous" protocol, what is the meaning of "the sender
sends a short portion"?
  What is the "short portion"? Is it a small part of the message from
the sender?

It's at least the message header (communicator, tag, etc.) so that the
receiver can figure out if this is the expected message or not.  In
practice, there is probably also some data in there as well.  The
amount of that portion depends on the MPI implementation and, in
practice, the interconnect the message traveled over,
MPI-implementation-dependent environment variables set by the user,
etc.  E.g., with OMPI over shared memory it's about 4 Kbytes by default
(if I remember correctly).


  Can this "rendezvous" protocol work automatically in the background,
  without the programmer indicating it in his program?

Right.  MPI actually allows you to force such synchronization with
MPI_Ssend, but typically MPI implementations use it automatically for
"plain" long sends as well, even if the user did not use MPI_Ssend.


  The "acknowledgement" can be generated by the receiver only when the
  corresponding MPI_Irecv has been posted by the receiver?

Right.
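One way to observe this (a sketch; probeRendezvous and the tag are made-up
names) is with a nonblocking synchronous send, whose request can only
complete once the receiver has matched it:

    #include <mpi.h>
    #include <stdio.h>

    // MPI_Issend's request cannot complete until the receiver has posted
    // the matching receive, so flag flipping to 1 is effectively the
    // rendezvous acknowledgement arriving at the sender.
    void probeRendezvous(void* buf, int nbytes, int dest, MPI_Comm comm)
    {
        MPI_Request req;
        MPI_Issend(buf, nbytes, MPI_BYTE, dest, 0 /* tag */, comm, &req);

        int flag = 0;
        while (!flag)
            MPI_Test(&req, &flag, MPI_STATUS_IGNORE);

        printf("receiver has posted the matching receive\n");
    }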