Dear all,
thanks a lot,
really, thanks a lot.
Diego
On 9 January 2015 at 19:56, Jeff Squyres (jsquyres) wrote:
> On Jan 9, 2015, at 1:54 PM, Diego Avesani wrote:
>
> > What does "YMMV" mean?
>
> http://netforbeginners.about.com/od/xyz/f/What-Is-YMMV.htm
>
> :-)
>
> --
> Jeff Squyres
> jsquy...@cisco.com
On Jan 9, 2015, at 1:54 PM, Diego Avesani wrote:
> What does "YMMV" mean?
http://netforbeginners.about.com/od/xyz/f/What-Is-YMMV.htm
:-)
--
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/
What does "YMMV" mean?
On 9 January 2015 at 19:44, Jeff Squyres (jsquyres) wrote:
> YMMV
Diego
On Jan 9, 2015, at 12:39 PM, George Bosilca wrote:
> I totally agree with Dave here. Moreover, based on the logic exposed by Jeff,
> there is no right solution because if one chooses to first wait on the receive
> requests this also leads to a deadlock as the send requests might not be
> progressed.
Dear Jeff, Dear George, Dear Dave, Dear all,
so, is it correct to use *MPI_Waitall*?
Is my program OK now? Do you see any other problems?
Thanks again
Diego
On 9 January 2015 at 18:39, George Bosilca wrote:
> I totally agree with Dave here. Moreover, based on the logic exposed by
> Jeff, there ...
I totally agree with Dave here. Moreover, based on the logic exposed by
Jeff, there is no right solution because if one chooses to first wait on the
receive requests this also leads to a deadlock as the send requests might
not be progressed.
As a side note, posting the receive requests first minimizes ...
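To make that concrete, here is a minimal Fortran sketch of the pattern
(all names below, e.g. send_req, recv_req, recvbuf, are placeholders and
not from Diego's attached program): every MPI_IRECV and MPI_ISEND is posted
before the first wait, so the two MPI_WAITALL calls at the end could be
issued in either order without risk of deadlock.

! Sketch only -- placeholder names, not Diego's program.
program post_all_then_wait
  use mpi
  implicit none
  integer :: ierr, rank, nprocs, i, nsend, nrecv
  integer :: sendbuf
  integer, allocatable :: recvbuf(:), send_req(:), recv_req(:)

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)

  allocate(recvbuf(0:nprocs-1), send_req(nprocs), recv_req(nprocs))
  sendbuf = rank
  nsend = 0
  nrecv = 0

  ! Post every receive and every send before any wait is issued.
  do i = 0, nprocs - 1
     if (i == rank) cycle
     nrecv = nrecv + 1
     call MPI_IRECV(recvbuf(i), 1, MPI_INTEGER, i, 0, MPI_COMM_WORLD, &
                    recv_req(nrecv), ierr)
  end do
  do i = 0, nprocs - 1
     if (i == rank) cycle
     nsend = nsend + 1
     call MPI_ISEND(sendbuf, 1, MPI_INTEGER, i, 0, MPI_COMM_WORLD, &
                    send_req(nsend), ierr)
  end do

  ! Everything is already posted, so the order of these two waits
  ! does not matter (a single combined MPI_WAITALL would also work).
  call MPI_WAITALL(nsend, send_req, MPI_STATUSES_IGNORE, ierr)
  call MPI_WAITALL(nrecv, recv_req, MPI_STATUSES_IGNORE, ierr)

  call MPI_FINALIZE(ierr)
end program post_all_then_wait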
On Jan 9, 2015, at 7:46 AM, Jeff Squyres (jsquyres) wrote:
> Yes, I know examples 3.8/3.9 are blocking examples.
>
> But it's morally the same as:
>
> MPI_WAITALL(send_requests...)
> MPI_WAITALL(recv_requests...)
>
> Strictly speaking, that can deadlock, too.
>
> In reality, it has far less ...
Yes, I know examples 3.8/3.9 are blocking examples.
But it's morally the same as:
MPI_WAITALL(send_requests...)
MPI_WAITALL(recv_requests...)
Strictly speaking, that can deadlock, too.
In reality, it has far less chance of deadlocking than examples 3.8 and 3.9
(because you're likely within ...
Dear George, Dear Jeff, Dear All,
Thanks Thanks a lot
Here is the new version of the program. Now there is only one barrier, and
there is no more allocate/deallocate in the receive part.
What do you think? Is it all right? Did I miss something, or do I need to
improve something else?
I have not completely understood ...
I'm confused by this statement. The examples pointed to are handling
blocking sends and receives, while this example is purely based on
non-blocking communications. In this particular case I see no harm in
waiting on the requests in any random order as long as all of them are
posted before the first wait.
Dear Jeff, Dear George, Dear all,
Isn't send_request a vector?
Are you suggesting to use CALL MPI_WAIT(REQUEST(:), MPI_STATUS_IGNORE,
MPIdata%iErr)?
I will try tomorrow morning, and also fix the sending and receiving
allocate/deallocate. Probably I will have to rethink the program.
I w ...
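For reference, MPI_WAIT completes a single request, while MPI_WAITALL
completes an array of them, so a whole REQUEST(:) vector would normally go
to MPI_WAITALL rather than MPI_WAIT. A hypothetical fragment reusing the
names from the message above (send_request, MPIdata%iErr; nreq is assumed
to be the number of posted requests):

! Wait for ONE request (MPI_WAIT takes a single request handle):
CALL MPI_WAIT(send_request(1), MPI_STATUS_IGNORE, MPIdata%iErr)

! Wait for a whole vector of nreq requests in one call:
CALL MPI_WAITALL(nreq, send_request, MPI_STATUSES_IGNORE, MPIdata%iErr)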
Also, you are calling WAITALL on all your sends and then WAITALL on all your
receives. This is also incorrect and may deadlock.
WAITALL on *all* your pending requests (sends and receives -- put them all in a
single array).
Look at examples 3.8 and 3.9 in the MPI-3.0 document.
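A minimal sketch of that suggestion (placeholder names, not Diego's
program): the send and the receive request go into one array, and a single
MPI_WAITALL completes both, in the spirit of examples 3.8/3.9.

! Sketch only -- placeholder names.  One array holds the send and the
! receive request; a single MPI_WAITALL completes them both.
subroutine exchange_one_int(comm, peer, sendval, recvval)
  use mpi
  implicit none
  integer, intent(in)  :: comm, peer, sendval
  integer, intent(out) :: recvval
  integer :: ierr
  integer :: requests(2)

  call MPI_IRECV(recvval, 1, MPI_INTEGER, peer, 0, comm, requests(1), ierr)
  call MPI_ISEND(sendval, 1, MPI_INTEGER, peer, 0, comm, requests(2), ierr)

  ! No separate wait for sends and receives -- one call covers both.
  call MPI_WAITALL(2, requests, MPI_STATUSES_IGNORE, ierr)
end subroutine exchange_one_int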
On Jan 8, 2015 ...
Diego,
Non-blocking communications only indicate that a communication will happen;
they do not force it to happen. They will only complete on the corresponding
MPI_Wait, which also marks the moment starting from which the data can be
safely altered or accessed (in the case of MPI_Irecv). Thus dea ...
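As an illustration of that rule, a small hypothetical sketch (made-up
names): the buffer handed to MPI_IRECV must not be read, modified, or
deallocated until the matching MPI_WAIT has returned.

! Sketch only -- made-up names.
subroutine recv_then_use(comm, source, n)
  use mpi
  implicit none
  integer, intent(in) :: comm, source, n
  integer :: ierr, request
  real, allocatable :: buf(:)

  allocate(buf(n))
  call MPI_IRECV(buf, n, MPI_REAL, source, 0, comm, request, ierr)

  ! Touching buf here (reading it, or deallocating it) would be erroneous:
  ! the receive has only been posted, not completed.

  call MPI_WAIT(request, MPI_STATUS_IGNORE, ierr)

  ! Only now is buf guaranteed to contain the received data.
  print *, 'first value received from rank', source, ':', buf(1)
  deallocate(buf)
end subroutine recv_then_use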
Dear Tom, Dear Jeff, Dear all,
Thanks again.
For Tom:
you are right, I fixed it.
For Jeff:
if I do not insert the CALL MPI_BARRIER(MPI_COMM_WORLD, MPIdata%iErr)
at line 112, the program does not stop.
Am I right?
Here is the new version.
Diego
On 8 January 2015 at 21:12, Tom Rosmond wrote:
With array bounds checking your program returns an out-of-bounds error
in the mpi_isend call at line 104. Looks like 'send_request' should be
indexed with 'sendcount', not 'icount'.
T. Rosmond
On Thu, 2015-01-08 at 20:28 +0100, Diego Avesani wrote:
> the attachment
>
> Diego
>
>
>
> On 8 J ...
the attachment
Diego
On 8 January 2015 at 19:44, Diego Avesani wrote:
> Dear all,
> I found the error.
> There is a Ndata2send(iCPU) instead of Ndata2recv(iCPU).
> In the attachment there is the correct version of the program.
>
> Only one thing, could you check if the use of MPI_WAITALL
>
What do you need the barriers for?
On Jan 8, 2015, at 1:44 PM, Diego Avesani wrote:
> Dear all,
> I found the error.
> There is a Ndata2send(iCPU) instead of Ndata2recv(iCPU).
> In the attachment there is the correct version of the program.
>
> Only one thing, could you check if the use of
Dear all,
I found the error.
There is a Ndata2send(iCPU) instead of Ndata2recv(iCPU).
In the attachment there is the correct version of the program.
Only one thing, could you check if the use of MPI_WAITALL
and MPI_BARRIER is correct?
Thanks again
Diego
On 8 January 2015 at 18:48, Diego Avesani wrote:
Dear all,
thanks, thanks a lot, I am learning a lot.
I have written a simple program that sends vectors of integers from one CPU
to another.
The program is written (at least for now) for 4 CPUs.
The program is quite simple:
Each CPU knows how much data it has to send to the other CPUs. This info is
then ...
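For reference, one common way to structure such an exchange, sketched below
with made-up names (this is not Diego's attached program): each rank first
tells every other rank how many integers it will send, here via
MPI_ALLTOALL, then posts all the MPI_IRECVs and MPI_ISENDs for the actual
data and finishes with a single MPI_WAITALL over all requests.

! Sketch only -- made-up names, not Diego's attached program.
program counts_then_data
  use mpi
  implicit none
  integer :: ierr, rank, nprocs, i, nreq, off
  integer, allocatable :: ndata2send(:), ndata2recv(:), recvoff(:)
  integer, allocatable :: sendbuf(:,:), recvbuf(:), requests(:)

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)

  allocate(ndata2send(0:nprocs-1), ndata2recv(0:nprocs-1), recvoff(0:nprocs-1))

  ! Each rank decides how many integers it sends to each other rank
  ! (here, arbitrarily, rank+1 integers to everyone else and 0 to itself).
  ndata2send = rank + 1
  ndata2send(rank) = 0

  ! Step 1: exchange the counts so every rank knows what it will receive.
  call MPI_ALLTOALL(ndata2send, 1, MPI_INTEGER, &
                    ndata2recv, 1, MPI_INTEGER, MPI_COMM_WORLD, ierr)

  ! Step 2: size the buffers from the counts.
  allocate(sendbuf(rank+1, 0:nprocs-1))
  sendbuf = rank
  off = 1
  do i = 0, nprocs - 1
     recvoff(i) = off
     off = off + ndata2recv(i)
  end do
  allocate(recvbuf(max(1, off-1)), requests(2*nprocs))

  ! Step 3: post all receives and all sends, then one MPI_WAITALL over both.
  nreq = 0
  do i = 0, nprocs - 1
     if (ndata2recv(i) > 0) then
        nreq = nreq + 1
        call MPI_IRECV(recvbuf(recvoff(i)), ndata2recv(i), MPI_INTEGER, &
                       i, 0, MPI_COMM_WORLD, requests(nreq), ierr)
     end if
  end do
  do i = 0, nprocs - 1
     if (ndata2send(i) > 0) then
        nreq = nreq + 1
        call MPI_ISEND(sendbuf(1,i), ndata2send(i), MPI_INTEGER, &
                       i, 0, MPI_COMM_WORLD, requests(nreq), ierr)
     end if
  end do
  call MPI_WAITALL(nreq, requests, MPI_STATUSES_IGNORE, ierr)

  call MPI_FINALIZE(ierr)
end program counts_then_data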