Re: [OMPI users] MPI_Type_Create_Struct + MPI_TYPE_CREATE_RESIZED

2015-01-08 Thread George Bosilca
Or use MPI_Type_match_size to find the right type. George. > On Jan 8, 2015, at 19:05 , Gus Correa wrote: > > Hi Diego > > *EITHER* > declare your QQ and PR (?) structure components as DOUBLE PRECISION > *OR* > keep them REAL(dp) but *fix* your "dp" definition, as George Bosilca > suggested
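A minimal sketch of George's suggestion, assuming a "dp" kind like Diego's and the Fortran 2008 storage_size intrinsic (program and variable names are illustrative, not from the thread):

    program match_type
       use mpi
       implicit none
       integer, parameter :: dp = selected_real_kind(15, 307)
       real(dp) :: sample
       integer  :: matched_type, ierr

       call MPI_INIT(ierr)
       ! Ask the library for the predefined MPI type whose size matches
       ! real(dp), instead of hard-coding MPI_DOUBLE_PRECISION.
       call MPI_TYPE_MATCH_SIZE(MPI_TYPECLASS_REAL, storage_size(sample)/8, &
                                matched_type, ierr)
       call MPI_FINALIZE(ierr)
    end program match_type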

Re: [OMPI users] MPI_Type_Create_Struct + MPI_TYPE_CREATE_RESIZED

2015-01-08 Thread Gus Correa
Hi Diego *EITHER* declare your QQ and PR (?) structure components as DOUBLE PRECISION *OR* keep them REAL(dp) but *fix* your "dp" definition, as George Bosilca suggested. Gus Correa On 01/08/2015 06:36 PM, Diego Avesani wrote: Dear Gus, Dear All, so are you suggesting to use DOUBLE PRECISION

Re: [OMPI users] send and receive vectors + variable length

2015-01-08 Thread George Bosilca
I'm confused by this statement. The examples pointed to are handling blocking sends and receives, while this example is purely based on non-blocking communications. In this particular case I see no harm in waiting on the requests in any random order, as long as all of them are posted before the first…

Re: [OMPI users] send and receive vectors + variable length

2015-01-08 Thread Diego Avesani
Dear Jeff, Dear George, Dear all, Is send_request not a vector? Are you suggesting to use CALL MPI_WAIT(REQUEST(:), MPI_STATUS_IGNORE, MPIdata%iErr) I will try tomorrow morning, and also to fix the sending and receiving allocate/deallocate. Probably I will have to think again about the program. I w…

Re: [OMPI users] MPI_Type_Create_Struct + MPI_TYPE_CREATE_RESIZED

2015-01-08 Thread Diego Avesani
Dear Gus, Dear All, so are you suggesting to use DOUBLE PRECISION and not REAL(dp)? Thanks again Diego On 9 January 2015 at 00:02, Gus Correa wrote: > On 01/08/2015 05:50 PM, Diego Avesani wrote: > >> Dear George, Dear all, >> what are the other issues? >> >> Why did you put in selected_real_k…

Re: [OMPI users] MPI_Type_Create_Struct + MPI_TYPE_CREATE_RESIZED

2015-01-08 Thread Gus Correa
On 01/08/2015 05:50 PM, Diego Avesani wrote: Dear George, Dear all, what are the other issues? Why did you put in selected_real_kind(15, 307) the number 307 Hi Diego That is the Fortran 90 (and later) syntax for selected_real_kind. The first number is the minimum number of decimal digits in the mantissa, the second is the minimum decimal exponent range…
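To illustrate Gus's explanation, a tiny self-contained sketch (names are illustrative) of what the two arguments request; on common hardware this selects IEEE double precision:

    program kinds
       implicit none
       ! At least 15 decimal digits of precision and a decimal
       ! exponent range of at least 307:
       integer, parameter :: dp = selected_real_kind(15, 307)
       print *, 'kind      =', dp
       print *, 'precision =', precision(1.0_dp)   ! decimal digits
       print *, 'range     =', range(1.0_dp)       ! decimal exponent range
    end program kinds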

Re: [OMPI users] MPI_Type_Create_Struct + MPI_TYPE_CREATE_RESIZED

2015-01-08 Thread Diego Avesani
Dear George, Dear all, what are the other issues? Why did you put in selected_real_kind(15, 307) the number 307 Thanks again Diego On 8 January 2015 at 23:24, George Bosilca wrote: > Diego, > > Please find below the corrected example. There were several issues but the > most important one, w…

Re: [OMPI users] MPI_Type_Create_Struct + MPI_TYPE_CREATE_RESIZED

2015-01-08 Thread George Bosilca
Diego, Please find below the corrected example. There were several issues but the most important one, which is certainly the cause of the segfault, is that "real(dp)" (with dp = selected_real_kind(p=16)) is NOT equal to MPI_DOUBLE_PRECISION. For double precision you should use 15 (and not 16). G…
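A minimal sketch of just the kind fix (George's full corrected example is attached to the original mail; this program and its names are illustrative):

    program dp_fix
       use mpi
       implicit none
       ! selected_real_kind(p=16) may select an extended type wider than
       ! C double, which then no longer matches MPI_DOUBLE_PRECISION.
       ! 15 digits (with range 307) selects IEEE double precision:
       integer, parameter :: dp = selected_real_kind(15, 307)
       real(dp) :: qq(4)
       integer  :: ierr

       call MPI_INIT(ierr)
       qq = 1.0_dp
       ! The buffer kind now matches the MPI datatype:
       call MPI_BCAST(qq, size(qq), MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, ierr)
       call MPI_FINALIZE(ierr)
    end program dp_fix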

Re: [OMPI users] send and receive vectors + variable length

2015-01-08 Thread Jeff Squyres (jsquyres)
Also, you are calling WAITALL on all your sends and then WAITALL on all your receives. This is also incorrect and may deadlock. WAITALL on *all* your pending requests (sends and receives -- put them all in a single array). Look at examples 3.8 and 3.9 in the MPI-3.0 document. On Jan 8, 2015…
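A self-contained sketch of the pattern Jeff describes (an all-to-all exchange with illustrative buffers, not Diego's program): post every send and receive first, then complete them with one MPI_WAITALL over a single request array:

    program waitall_all
       use mpi
       implicit none
       integer :: rank, nprocs, ierr, i, nreq
       integer, allocatable :: requests(:), sendbuf(:), recvbuf(:)

       call MPI_INIT(ierr)
       call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
       call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
       allocate(requests(2*(nprocs-1)), sendbuf(nprocs), recvbuf(nprocs))
       sendbuf = rank
       nreq = 0
       do i = 0, nprocs - 1
          if (i == rank) cycle
          nreq = nreq + 1
          call MPI_IRECV(recvbuf(i+1), 1, MPI_INTEGER, i, 0, &
                         MPI_COMM_WORLD, requests(nreq), ierr)
          nreq = nreq + 1
          call MPI_ISEND(sendbuf(i+1), 1, MPI_INTEGER, i, 0, &
                         MPI_COMM_WORLD, requests(nreq), ierr)
       end do
       ! Sends and receives complete together -- no separate WAITALLs:
       call MPI_WAITALL(nreq, requests, MPI_STATUSES_IGNORE, ierr)
       call MPI_FINALIZE(ierr)
    end program waitall_all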

Re: [OMPI users] send and receive vectors + variable length

2015-01-08 Thread George Bosilca
Diego, Non-blocking communications only indicate that a communication will happen; they do not force it to happen. They will only complete on the corresponding MPI_Wait, which also marks the moment starting from which the data can be safely altered or accessed (in the case of the MPI_Irecv). Thus dea…

Re: [OMPI users] send and receive vectors + variable length

2015-01-08 Thread Diego Avesani
Dear Tom, Dear Jeff, Dear all, Thanks again. For Tom: you are right, I fixed it. For Jeff: if I do not insert the CALL MPI_BARRIER(MPI_COMM_WORLD, MPIdata%iErr) in line 112, the program does not stop. Am I right? Here is the new version Diego On 8 January 2015 at 21:12, Tom Rosmond wrote:…

Re: [OMPI users] libpsm_infinipath issues?

2015-01-08 Thread Gus Correa
Hi Michael, Andrew, list knem doesn't work in OMPI 1.8.3. See this thread: http://www.open-mpi.org/community/lists/users/2014/10/25511.php A fix was promised for OMPI 1.8.4: http://www.open-mpi.org/software/ompi/v1.8/ Have you tried it? I hope this helps, Gus Correa On 01/08/2015 04:36 PM,…

Re: [OMPI users] libpsm_infinipath issues?

2015-01-08 Thread Friedley, Andrew
Hi Mike, Have you contacted your admins, or the vendor that provided your True Scale and/or PSM installation? E.g. Redhat, or Intel via ibsupp...@intel.com? They are normally the recommended path for True Scale support. That said, here's some things to look into:

[OMPI users] -fgnu89-inline needed to avoid "multiple definition of `lstat64'" error

2015-01-08 Thread Jesse Ziser
Hello, When building OpenMPI 1.8.4 on Linux using gcc 4.8.2, the build fails for me with errors like: romio/.libs/libromio_dist.a(delete.o): In function `lstat64': delete.c:(.text+0x0): multiple definition of `lstat64' romio/.libs/libromio_dist.a(close.o):close.c:(.text+0x0): first defined her…
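For reference, the workaround in the subject line would typically be passed at configure time, along these lines (a sketch; the prefix and any other options are assumptions about the local setup):

    ./configure CFLAGS="-fgnu89-inline" --prefix=/opt/openmpi-1.8.4
    make all install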

[OMPI users] libpsm_infinipath issues?

2015-01-08 Thread VanEss.Michael
Hello all, Our clusters were just upgraded to both a new version of PGI (14.9) as well as openmpi (1.8.3). Previous versions were 12.1 and 1.6 respectively, and those compiled and linked just fine. The newest versions are not linking my mpi applications at all. Here's the problem: /opt/scyl…

Re: [OMPI users] send and receive vectors + variable length

2015-01-08 Thread Tom Rosmond
With array bounds checking your program returns an out-of-bounds error in the mpi_isend call at line 104. Looks like 'send_request' should be indexed with 'sendcount', not 'icount'. T. Rosmond On Thu, 2015-01-08 at 20:28 +0100, Diego Avesani wrote: > the attachment > > Diego > > > > On 8 J…
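The fix Tom suggests, sketched as a fragment using the thread's variable names (Ndata2send, send_request, icount, sendcount; surrounding declarations omitted and the loop structure is a guess at the intended code): when a send is only posted for CPUs that actually receive data, the request array must be indexed by the running send counter, not by the loop index:

    sendcount = 0
    do icount = 1, nCPU
       if (Ndata2send(icount) > 0) then
          sendcount = sendcount + 1
          call MPI_ISEND(sendbuf(:, icount), Ndata2send(icount), MPI_INTEGER, &
                         icount-1, tag, MPI_COMM_WORLD, &
                         send_request(sendcount), ierr)  ! not send_request(icount)
       end if
    end do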

Re: [OMPI users] send and receive vectors + variable length

2015-01-08 Thread Diego Avesani
the attachment Diego On 8 January 2015 at 19:44, Diego Avesani wrote: > Dear all, > I found the error. > There is a Ndata2send(iCPU) instead of Ndata2recv(iCPU). > In the attachment there is the correct version of the program. > > Only one thing, could you check if the use of MPI_WAITALL >…

Re: [OMPI users] send and receive vectors + variable length

2015-01-08 Thread Jeff Squyres (jsquyres)
What do you need the barriers for? On Jan 8, 2015, at 1:44 PM, Diego Avesani wrote: > Dear all, > I found the error. > There is a Ndata2send(iCPU) instead of Ndata2recv(iCPU). > In the attachment there is the correct version of the program. > > Only one thing, could you check if the use of…

Re: [OMPI users] send and receive vectors + variable length

2015-01-08 Thread Diego Avesani
Dear all, I found the error. There is a Ndata2send(iCPU) instead of Ndata2recv(iCPU). In the attachment there is the correct version of the program. Only one thing, could you check if the use of MPI_WAITALL and MPI_BARRIER is correct? Thanks again Diego On 8 January 2015 at 18:48, Diego…

[OMPI users] send and receive vectors + variable length

2015-01-08 Thread Diego Avesani
Dear all, thanks a lot, I am learning a lot. I have written a simple program that sends vectors of integers from one CPU to another. The program is written (at least for now) for 4 CPUs. The program is quite simple: each CPU knows how much data it has to send to the other CPUs. This info is then…

Re: [OMPI users] MPI_Type_Create_Struct + MPI_TYPE_CREATE_RESIZED

2015-01-08 Thread Jeff Squyres (jsquyres)
There were still some minor errors left over in the attached program. I strongly encourage you to use "use mpi" instead of "include 'mpif.h'" because you will get compile-time errors when you pass incorrect parameters to MPI subroutines, or forget to pass them. When I switched your program to "use mpi…
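A minimal sketch of the change Jeff recommends: with "use mpi", a wrong argument count or type in an MPI call is caught by the compiler instead of failing at run time.

    program hello
       use mpi            ! instead of:  include 'mpif.h'
       implicit none
       integer :: rank, ierr

       call MPI_INIT(ierr)
       call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
       print *, 'hello from rank', rank
       call MPI_FINALIZE(ierr)
    end program hello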

Re: [OMPI users] MPI_Type_Create_Struct + MPI_TYPE_CREATE_RESIZED

2015-01-08 Thread Gilles Gouaillardet
Diego, yes, it works for me (at least with the v1.8 head and gnu compilers) Cheers, Gilles On 2015/01/08 17:51, Diego Avesani wrote: > Dear Gilles, > thanks again, however it does not work. > > the program says: "SIGSEGV, segmentation fault occurred" > > Does the program run in your case?…

Re: [OMPI users] difference of behaviour for MPI_Publish_name between openmpi-1.4.5 and openmpi-1.8.4

2015-01-08 Thread Bernard Secher
Thanks Gilles! That works with this MPI_Info given to MPI_Publish_name. Cheers, Bernard On 08/01/2015 03:47, Gilles Gouaillardet wrote: Well, per the source code, this is not a bug but a feature: from the publish function in ompi/mca/pubsub/orte/pubsub_orte.c ompi_info_get_bool(info,…
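A sketch of the kind of call Bernard describes, assuming the boolean info key read by pubsub_orte.c is ompi_global_scope (an assumption here; check the MPI_Publish_name man page of your Open MPI version) and an illustrative service name:

    program publish
       use mpi
       implicit none
       integer :: info, ierr
       character(len=MPI_MAX_PORT_NAME) :: port

       call MPI_INIT(ierr)
       call MPI_OPEN_PORT(MPI_INFO_NULL, port, ierr)
       call MPI_INFO_CREATE(info, ierr)
       ! Assumed key: make the name visible beyond the local job's scope.
       call MPI_INFO_SET(info, 'ompi_global_scope', 'true', ierr)
       call MPI_PUBLISH_NAME('my_service', info, port, ierr)
       ! ... accept connections here ...
       call MPI_UNPUBLISH_NAME('my_service', info, port, ierr)
       call MPI_CLOSE_PORT(port, ierr)
       call MPI_INFO_FREE(info, ierr)
       call MPI_FINALIZE(ierr)
    end program publish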

Re: [OMPI users] MPI_Type_Create_Struct + MPI_TYPE_CREATE_RESIZED

2015-01-08 Thread Diego Avesani
Dear Gilles, thanks again, however it does not work. the program says: "SIGSEGV, segmentation fault occurred" Does the program run in your case? Thanks again Diego On 8 January 2015 at 03:02, Gilles Gouaillardet < gilles.gouaillar...@iferc.org> wrote: > Diego, > > my bad, i should have p…

Re: [OMPI users] Icreasing OFED registerable memory

2015-01-08 Thread Waleed Lotfy
You are right. I didn't know that SGE used limits other than '/etc/security/limits.conf', even though you explained it :/ The resolution is to add 'H_MEMORYLOCKED=unlimited' to the execd_params. Thank you all for your time and efforts, and keep up the great work :) Waleed Lotfy Bibliotheca A…
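For anyone hitting the same wall: the setting Waleed describes lives in the Grid Engine cluster configuration, roughly as below (a sketch, assuming a standard SGE install; the execd daemons must pick up the change before the new hard memlock limit applies):

    # qconf -mconf               <- edit the global cluster configuration
    execd_params   H_MEMORYLOCKED=unlimited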