Hi Jeffy,
I checked the SEND/RECV buffers and they look OK to me. The code I sent
works only when I statically initialize the array.
The code fails every time I use malloc to initialize the array.
Can you please look at the code and let me know what is wrong?
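For illustration, here is a minimal sketch of the heap-allocation pattern being described. Everything beyond ARRAYSIZE and the data pointer (which appear in the code posted later in the thread) is an assumption, not the actual program:

/* Hypothetical sketch: allocating the array on the heap instead of
 * declaring it statically. The buffer must be allocated, and large
 * enough, on every rank before it is handed to MPI_Send/MPI_Recv. */
#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"

#define ARRAYSIZE 20

int *data;

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    data = malloc(ARRAYSIZE * sizeof(int));
    if (data == NULL) {
        fprintf(stderr, "malloc failed\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    /* ... MPI_Send/MPI_Recv on data ... */

    free(data);
    MPI_Finalize();
    return 0;
}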
On Wed, Apr 18, 2012 at 8:11 PM, Jef
I do not see where you initialize the offset on the "Non-master tasks".
This could be the problem.
Pascal
users-boun...@open-mpi.org wrote on 19/04/2012 09:18:31:
> From: Rohan Deshpande
> To: Open MPI Users
> Date: 19/04/2012 09:18
> Subject: Re: [OMPI users] machine exited on signal 11 (
Hi Pascal,
The offset is received from the master task, so there is no need to initialize it on the
non-master tasks?
I am not sure what you meant exactly.
Thanks
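For reference, a minimal sketch of the pattern being asked about, assuming the offset is exchanged with a plain MPI_Send/MPI_Recv pair. The variable names follow the code posted later in the thread; the rest is illustrative:

/* Hypothetical sketch of the offset exchange under discussion. The
 * variable does not need to be pre-initialized on the workers, but the
 * matching MPI_Recv must complete before offset is used there. */
if (taskid == MASTER) {
    for (dest = 1; dest < numtasks; dest++) {
        offset = dest * chunksize;
        MPI_Send(&offset, 1, MPI_INT, dest, tag1, MPI_COMM_WORLD);
    }
} else {
    MPI_Recv(&offset, 1, MPI_INT, MASTER, tag1, MPI_COMM_WORLD,
             MPI_STATUS_IGNORE);
    /* offset is defined only after this receive returns */
}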
On Thu, Apr 19, 2012 at 3:36 PM, wrote:
> I do not see where you initialize the offset on the "Non-master tasks".
> This could be the problem.
>
> Pas
Send the most recent version of your code.
Have you tried running it through a memory-checking debugger, like valgrind?
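One common way to do that, assuming the job is started with mpirun (the process count and executable name below are placeholders):

  # hypothetical launch line; substitute the real process count and program
  mpirun -np 2 valgrind --leak-check=full ./your_program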
On Apr 19, 2012, at 4:24 AM, Rohan Deshpande wrote:
> Hi Pascal,
>
> The offset is received from the master task, so there is no need to initialize it on
> the non-master tasks?
>
> not
Dear mail-list users,
I have a problem when I try to run a parallel gromacs job on Fedora Core 15.
The same job (same installation options and network setup) works fine on
Fedora Core 13. I already tried a Fedora forum but could not find a
solution there ...
[terminal output start]
No, I haven't tried using valgrind.
Here is the latest code:
#include "mpi.h"
#include
#include
#include
#define ARRAYSIZE 20
#define MASTER 0
int *data;
int main(int argc, char* argv[])
{
int numtasks, taskid, rc, dest, offset, i, j, tag1, tag2, source,
chunksize, namelen;
int mys
Hello everybody,
I am measuring some timings for MPI_Send/MPI_Recv. I perform a single
communication between 2 processes and repeat it several times to get
meaningful values. The message being sent varies from 64 bytes up to 16
MB, doubling the size each time (64, 128, 256, ..., 8M, 16M).
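A minimal sketch of such a measurement, assuming a blocking ping-pong between ranks 0 and 1 timed with MPI_Wtime; the repetition count, buffer handling, and output format are arbitrary choices:

/* Hypothetical ping-pong timing sketch; run with exactly 2 processes. */
#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int rank, nprocs, reps = 1000;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    if (nprocs < 2)
        MPI_Abort(MPI_COMM_WORLD, 1);

    for (size_t size = 64; size <= 16UL * 1024 * 1024; size *= 2) {
        char *buf = malloc(size);
        if (buf == NULL)
            MPI_Abort(MPI_COMM_WORLD, 1);

        double t0 = MPI_Wtime();
        for (int r = 0; r < reps; r++) {
            if (rank == 0) {
                MPI_Send(buf, (int)size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, (int)size, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, (int)size, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, (int)size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double elapsed = MPI_Wtime() - t0;

        if (rank == 0)
            printf("%10zu bytes: %g s per round trip\n",
                   size, elapsed / reps);
        free(buf);
    }

    MPI_Finalize();
    return 0;
}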
users-boun...@open-mpi.org wrote on 19/04/2012 10:24:16:
> From: Rohan Deshpande
> To: Open MPI Users
> Date: 19/04/2012 10:24
> Subject: Re: [OMPI users] machine exited on signal 11 (Segmentation
fault).
> Sent by: users-boun...@open-mpi.org
>
> Hi Pascal,
>
> The offset is received
users-boun...@open-mpi.org wrote on 19/04/2012 12:42:44:
> From: Rohan Deshpande
> To: Open MPI Users
> Date: 19/04/2012 12:44
> Subject: Re: [OMPI users] machine exited on signal 11 (Segmentation
fault).
> Sent by: users-boun...@open-mpi.org
>
> No, I haven't tried using valgrind.
>
>
In addition to what Pascal said, you should definitely run your code through a
memory-checking debugger (e.g., valgrind).
On Apr 19, 2012, at 7:21 AM, pascal.dev...@bull.net wrote:
> users-boun...@open-mpi.org wrote on 19/04/2012 10:24:16:
>
> > From: Rohan Deshpande
> > To: Open MPI User
What happens if you "dig quoVadis27"?
If you don't get a valid answer back, then it's not a resolvable name.
On Apr 19, 2012, at 6:42 AM, Bernhard Knapp wrote:
> Dear mail-list users,
>
> I have a problem when I try to run a parallel gromacs job on fedora core 15.
> The same job (same install
dig returns the following:
[terminal output start]
[bknapp@quoVadis27 ~]$ dig quoVadis27
; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.2.rc1.fc15 <<>> quoVadis27
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 57978
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHO
Simone,
I think most of the issues with the numbers you're getting come from the
internal protocols of Open MPI and from the way compilers "optimize" the memcpy
function. In fact, memcpy translates to different execution paths
based on the size of the data. For large memory copie
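To illustrate that size dependence on its own, here is a small stand-alone sketch that times memcpy over the same range of sizes; the repetition scheme and output format are arbitrary choices and have nothing to do with Open MPI internals:

/* Hypothetical microbenchmark: time memcpy for sizes from 64 B to 16 MB. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

int main(void)
{
    for (size_t size = 64; size <= 16UL * 1024 * 1024; size *= 2) {
        /* Scale the repetitions so every size copies ~64 MB in total. */
        int reps = (int)(64UL * 1024 * 1024 / size);
        char *src = malloc(size);
        char *dst = malloc(size);
        if (src == NULL || dst == NULL)
            return 1;
        memset(src, 1, size);

        clock_t t0 = clock();
        for (int r = 0; r < reps; r++)
            memcpy(dst, src, size);
        double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

        /* Print one destination byte so the copies are not optimized away. */
        printf("%10zu bytes x %7d copies: %.4f s (last byte %d)\n",
               size, reps, secs, dst[size - 1]);

        free(src);
        free(dst);
    }
    return 0;
}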