Hi Jack

> the buffersize is the same in both iterations.

That doesn't help if the message sent in the second iteration is larger
than buffersize.
But as David says, without the details of how the messages are sent and
of any changes to the receive buffer, one can't make a precise diagnosis.
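If the sender's message size can vary between iterations, one way to avoid
the truncation (just an untested sketch using the plain C API, reusing the
mytaskTag name from your snippet and assuming the sender transmits
MPI_DOUBLE from rank 0) is to probe the pending message and size the
buffer from it:

#include <mpi.h>
#include <vector>

// Fragment; assumes MPI_Init has already been called.
MPI_Status status;

// Block until a message with this tag from rank 0 is pending,
// without actually receiving it yet.
MPI_Probe(0, mytaskTag, MPI_COMM_WORLD, &status);

// Ask how many MPI_DOUBLE elements the pending message contains.
int count = 0;
MPI_Get_count(&status, MPI_DOUBLE, &count);

// Allocate exactly that many elements and then receive.
std::vector<double> recvBuffer(count);
MPI_Recv(recvBuffer.data(), count, MPI_DOUBLE, 0, mytaskTag,
         MPI_COMM_WORLD, MPI_STATUS_IGNORE);

Since the receive count then always matches what was actually sent,
MPI_ERR_TRUNCATE cannot occur for that receive.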

jody



On Mon, Nov 1, 2010 at 6:41 PM, Jack Bryan <dtustud...@hotmail.com> wrote:
> thanks
> I use
> double* recvArray = new double[buffersize];  // buffersize is the receive buffer size
> MPI::COMM_WORLD.Recv(&(recvDataArray[0]), xVSize, MPI_DOUBLE, 0, mytaskTag);
> delete [] recvArray;
> In the first iteration, the receiver works well.
> But in the second iteration, I got
> MPI_ERR_TRUNCATE: message truncated
> The buffersize is the same in both iterations.
>
> Any help is appreciated.
> thanks
> Nov. 1 2010
>
>> Date: Mon, 1 Nov 2010 08:08:08 +0100
>> From: jody....@gmail.com
>> To: us...@open-mpi.org
>> Subject: Re: [OMPI users] message truncated error
>>
>> Hi Jack
>>
>> Usually MPI_ERR_TRUNCATE means that the buffer you use in MPI_Recv
>> (or MPI::COMM_WORLD.Recv) is too small to hold the message coming in.
>> Check your code to make sure you assign enough memory to your buffers.
>>
>> regards
>> Jody
>>
>>
>> On Mon, Nov 1, 2010 at 7:26 AM, Jack Bryan <dtustud...@hotmail.com> wrote:
>> > Hi,
>> > In my MPI program, the master sends many messages to another worker with
>> > the same tag.
>> > The worker uses
>> > MPI::COMM_WORLD.Recv(&message_para_to_one_worker, 1,
>> > message_para_to_workers_type, 0, downStreamTaskTag);
>> > to receive the messages.
>> > I got error:
>> >
>> > [n36:94880] *** An error occurred in MPI_Recv
>> > [n36:94880] *** on communicator MPI_COMM_WORLD
>> > [n36:94880] *** MPI_ERR_TRUNCATE: message truncated
>> > [n36:94880] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
>> > [n36:94880] *** Process received signal ***
>> > [n36:94880] Signal: Segmentation fault (11)
>> > [n36:94880] Signal code: Address not mapped (1)
>> >
>> > Is this (the same tag) the reason for the errors?
>> > Any help is appreciated.
>> > thanks
>> > Jack
>> > Oct. 31 2010
