Re: [OMPI users] MPI_type_create_struct + MPI_Type_vector + MPI_Type_contiguous

2015-01-16 Thread George Bosilca
The subarray creation is a multi-dimensional extension of the vector type. You can see it as a vector of vectors of vectors and so on, one vector per dimension. The stride array is used to declare, for each dimension, the relative displacement (in number of elements) from the beginning of the
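George's picture of a subarray as nested vectors can be made concrete. The sketch below (plain Python, no MPI; all names are mine, purely illustrative) computes the linear element offsets that a row-major sub-block of shape `subsizes`, placed at `starts` inside an array of shape `sizes`, would select — the same elements MPI_Type_create_subarray describes.

```python
from itertools import product

def subarray_offsets(sizes, subsizes, starts):
    """Linear (row-major) element offsets of a sub-block inside an array.

    sizes    -- full array shape, e.g. [4, 4]
    subsizes -- shape of the selected block, e.g. [2, 2]
    starts   -- index where the block begins, e.g. [1, 1]
    """
    # stride of dimension d = product of the sizes of all faster dimensions
    strides, acc = [], 1
    for s in reversed(sizes):
        strides.append(acc)
        acc *= s
    strides.reverse()
    return [sum((starts[d] + idx[d]) * strides[d] for d in range(len(sizes)))
            for idx in product(*(range(n) for n in subsizes))]

# A 2x2 block at (1,1) inside a 4x4 array: one "vector" per dimension.
print(subarray_offsets([4, 4], [2, 2], [1, 1]))  # → [5, 6, 9, 10]
```

Each dimension contributes one level of the "vector of vectors" nesting; the per-dimension strides are derived from `sizes`, which is why no explicit stride argument appears in the interface.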

Re: [OMPI users] libevent hangs on app finalize stage

2015-01-16 Thread Leonid
Yes, it works now. Thanks for the prompt support. On 15.01.2015 21:50, Ralph Castain wrote: Fixed - sorry about that! On Jan 15, 2015, at 10:39 AM, Ralph Castain wrote: Ah, indeed - I found the problem. Fix coming momentarily On Jan 15, 2015, at 10:31 AM, Ralph Castain wrote: Hmmm…I’m n

Re: [OMPI users] MPI_type_create_struct + MPI_Type_vector + MPI_Type_contiguous

2015-01-16 Thread Diego Avesani
Dear All, I'm sorry to insist, but I am not able to understand. Moreover, I have realized that I have to explain myself better. I will try to explain with my program. Each CPU has *npt* particles. My program understands how many particles each CPU has to send, according to their positions. Then I can do:

Re: [OMPI users] OpenMPI 1.8.4rc3, 1.6.5 and 1.6.3: segmentation violation in mca_io_romio_dist_MPI_File_close

2015-01-16 Thread Eric Chamberland
On 01/14/2015 05:57 PM, Rob Latham wrote: On 12/17/2014 07:04 PM, Eric Chamberland wrote: Hi! Here is a "poor man's fix" that works for me (the idea is not from me, thanks to Thomas H.): #1- char* lCwd = getcwd(0,0); #2- chdir(lPathToFile); #3- MPI_File_open(...,lFileNameWithoutTooLongPath,

Re: [OMPI users] MPI_type_create_struct + MPI_Type_vector + MPI_Type_contiguous

2015-01-16 Thread George Bosilca
The operation you describe is a pack operation, gathering originally discontiguous elements into a contiguous buffer. As a result there is no need to use MPI_TYPE_VECTOR; instead you can just use the type you created so far (MPI_my_STRUCT) with a count. George. On Fri, Jan 1
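George's suggestion — send the struct type with a count rather than wrapping it in a vector — relies on the fact that `count` repetitions of a datatype are laid out one extent apart. A small sketch of the byte offsets such a send touches (plain Python; the numbers and names are illustrative, not from the thread):

```python
def struct_count_offsets(displacements, extent, count):
    """Byte offsets covered by sending `count` items of a struct type
    whose members sit at `displacements` within an `extent`-byte element."""
    return [k * extent + d for k in range(count) for d in displacements]

# A struct with members at bytes 0 and 8 and extent 16, sent with count=3:
print(struct_count_offsets([0, 8], 16, 3))  # → [0, 8, 16, 24, 32, 40]
```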

Re: [OMPI users] MPI_type_create_struct + MPI_Type_vector + MPI_Type_contiguous

2015-01-16 Thread Diego Avesani
Dear George, Dear all, I have been studying. It's clear for the 2D case QQ(:,:). For example, if real :: QQ(npt,9), with 9 the characteristics of each particle, I can simply do: call MPI_TYPE_VECTOR(QQ(1:50), 9, 9, MPI_REAL, my_2D_type, ierr) I send 50 elements of QQ. I am in Fortran, so a two dimens
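Two things are worth checking in the quoted call. First, the first argument of MPI_TYPE_VECTOR is an integer count, not an array slice like QQ(1:50). Second, Fortran arrays are column-major, so in QQ(npt,9) the nine values of one particle are npt elements apart — they match a vector of count=9, blocklength=1, stride=npt. A plain-Python sketch (names mine) of the element offsets such a vector selects:

```python
def vector_offsets(count, blocklength, stride):
    """Element offsets described by MPI_TYPE_VECTOR(count, blocklength, stride)."""
    return [b * stride + k for b in range(count) for k in range(blocklength)]

# Column-major QQ(npt,9) with npt=5: particle 1's nine values sit at
# offsets 0, 5, 10, ..., 40 -- exactly a (count=9, blocklength=1, stride=npt) vector.
print(vector_offsets(9, 1, 5))  # → [0, 5, 10, 15, 20, 25, 30, 35, 40]
```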

Re: [OMPI users] MPI_type_create_struct + MPI_Type_vector + MPI_Type_contiguous

2015-01-16 Thread Diego Avesani
Dear all, Could I use MPI_PACK? Diego On 16 January 2015 at 16:26, Diego Avesani wrote: > Dear George, Dear all, > > I have been studying. It's clear for 2D case QQ(:,:). > > For example if > real :: QQ(npt,9) , with 9 the characteristic of each particles. > > I can simple: > > call MPI_TY

Re: [OMPI users] MPI_type_create_struct + MPI_Type_vector + MPI_Type_contiguous

2015-01-16 Thread George Bosilca
You could, but you don’t need to. The datatype engine of Open MPI does a fair job of packing/unpacking the data on the fly, so you don’t have to. George. > On Jan 16, 2015, at 11:32 , Diego Avesani wrote: > > Dear all, > > Could I use MPI_PACK? > > > Diego > > > On 16 January 201

Re: [OMPI users] MPI_type_create_struct + MPI_Type_vector + MPI_Type_contiguous

2015-01-16 Thread Diego Avesani
Dear George, Dear All, and what do you think about the previous post? Thanks again Diego On 16 January 2015 at 18:11, George Bosilca wrote: > You could but you don’t need to. The datatype engine of Open MPI is doing > a fair job of packing/unpacking the data on the flight, so you don’t have

Re: [OMPI users] MPI_type_create_struct + MPI_Type_vector + MPI_Type_contiguous

2015-01-16 Thread Diego Avesani
Dear All, in the attachment is the 2D example; now I will try the 3D example. What do you think of it? Is it correct? The idea is to build a 2D data type to send 3D data. Diego On 16 January 2015 at 18:19, Diego Avesani wrote: > Dear George, Dear All, > > and what do you think about the previous

Re: [OMPI users] MPI_type_create_struct + MPI_Type_vector + MPI_Type_contiguous

2015-01-16 Thread Diego Avesani
Dear all, here is the 3D example, but unfortunately it does not work. I believe that there is some problem with the stride. What do you think? Thanks again to everyone Diego On 16 January 2015 at 19:20, Diego Avesani wrote: > Dear All, > in the attachment the 2D example, Now I will try the 3D e
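Stride bugs in 3D types usually come from composing the per-dimension patterns with the wrong base: each outer stride must be measured in the full array's dimensions, not the block's. A plain-Python sketch of how the per-dimension patterns nest for a column-major Fortran array (all names and sizes are mine, for illustration only):

```python
def vector_offsets(count, blocklength, stride):
    """Element offsets of MPI_TYPE_VECTOR(count, blocklength, stride)."""
    return [b * stride + k for b in range(count) for k in range(blocklength)]

def nest(outer, inner):
    """Compose patterns: every outer displacement is a base for the inner one."""
    return [o + i for o in outer for i in inner]

# A 2x2x2 block of A(nx,ny,nz) with nx=4, ny=3 (column-major): the innermost
# dimension is contiguous, the next strides by nx, the outermost by nx*ny.
nx, ny = 4, 3
block = nest(vector_offsets(2, 1, nx * ny),   # z: stride nx*ny = 12
             nest(vector_offsets(2, 1, nx),   # y: stride nx = 4
                  vector_offsets(2, 1, 1)))   # x: contiguous
print(block)  # → [0, 1, 4, 5, 12, 13, 16, 17]
```

If `nx*ny` were mistakenly replaced by the block's own extent, the outer level would land on the wrong elements — which is the kind of wrong-stride symptom Diego describes.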

[OMPI users] How to handle strides in MPI_Create_type_subarray - Re: MPI_type_create_struct + MPI_Type_vector + MPI_Type_contiguous

2015-01-16 Thread Gus Correa
Hi George It is still not clear to me how to deal with strides in MPI_Type_create_subarray. The function/subroutine interface doesn't mention strides at all. It is a pity that there is so little literature (books) about MPI, and that the existing books lag behind the new MPI developments and

[OMPI users] Problem with connecting to 3 or more nodes

2015-01-16 Thread Chan, Elbert
Hi I'm hoping that someone will be able to help me figure out a problem with connecting to multiple nodes with v1.8.4. Currently, I'm running into this issue: $ mpirun --host host1 hostname host1 $ mpirun --host host2,host3 hostname host2 host3 Running this command on 1 or 2 nodes generates t

Re: [OMPI users] How to handle strides in MPI_Create_type_subarray - Re: MPI_type_create_struct + MPI_Type_vector + MPI_Type_contiguous

2015-01-16 Thread George Bosilca
Gus, Please see my answers inline. > On Jan 16, 2015, at 14:24 , Gus Correa wrote: > > Hi George > > It is still not clear to me how to deal with strides in > MPI_Create_type_subarray. > The function/subroutine interface doesn’t mention strides at all. That’s indeed a little tricky. However,
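One way to see why the subarray interface has no explicit stride argument: the strides are implied by array_of_sizes. In particular, a vector with stride `s` selects, offset for offset, the same elements as a subarray whose full row length is `s`. A pure-Python check of that equivalence (helper names are mine; this is a sketch of the layout arithmetic, not Open MPI internals):

```python
from itertools import product

def vector_offsets(count, blocklength, stride):
    """Element offsets of MPI_TYPE_VECTOR(count, blocklength, stride)."""
    return [b * stride + k for b in range(count) for k in range(blocklength)]

def subarray_offsets(sizes, subsizes, starts):
    """Row-major element offsets of a sub-block inside an array."""
    strides, acc = [], 1
    for s in reversed(sizes):
        strides.append(acc)
        acc *= s
    strides.reverse()
    return [sum((starts[d] + idx[d]) * strides[d] for d in range(len(sizes)))
            for idx in product(*(range(n) for n in subsizes))]

# A (count=4, blocklength=2, stride=5) vector equals a [4,2] subarray at
# [0,0] inside a [4,5] row-major array: the "missing" stride is sizes[-1].
assert vector_offsets(4, 2, 5) == subarray_offsets([4, 5], [4, 2], [0, 0])
```

So to express a stride with a subarray, one embeds the data in a larger declared array and lets the extra columns supply the gap.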

Re: [OMPI users] Problem with connecting to 3 or more nodes

2015-01-16 Thread Jeff Squyres (jsquyres)
It's because Open MPI uses a tree-based ssh startup pattern. (amusingly enough, I'm literally half way through writing up a blog entry about this exact same issue :-) ) That is, not only does Open MPI ssh from your mpirun-server to host1, Open MPI may also ssh from host1 to host2 (or host1 to h
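Jeff's tree-based startup can be illustrated with a binomial spanning tree, a common fan-out shape for launchers (whether Open MPI 1.8 uses exactly this topology is an assumption of this sketch, and the function name is mine). Rank 0 is the mpirun host:

```python
def binomial_children(rank, nprocs):
    """Ranks that `rank` launches directly in a binomial spanning tree."""
    children, mask = [], 1
    while mask < nprocs:
        if rank & mask:          # a set low bit means `rank` is a child at this level
            break
        child = rank | mask
        if child < nprocs:
            children.append(child)
        mask <<= 1
    return children

# With 4 hosts: the mpirun node sshes to hosts 1 and 2, and host 2 sshes to
# host 3 -- so 3+ hosts need host-to-host ssh, unlike the 1- or 2-host case.
for r in range(4):
    print(r, binomial_children(r, 4))
```

This matches the reported symptom: runs on 1 or 2 nodes only need ssh from the mpirun machine, while 3 or more nodes also require passwordless ssh between the compute hosts themselves.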

Re: [OMPI users] How to handle strides in MPI_Create_type_subarray - Re: MPI_type_create_struct + MPI_Type_vector + MPI_Type_contiguous

2015-01-16 Thread Gus Correa
Hi George Many thanks for your answer and interest in my questions. ... so ... more questions inline ... On 01/16/2015 03:41 PM, George Bosilca wrote: Gus, Please see my answers inline. On Jan 16, 2015, at 14:24 , Gus Correa wrote: Hi George It is still not clear to me how to deal with st

Re: [OMPI users] How to handle strides in MPI_Create_type_subarray - Re: MPI_type_create_struct + MPI_Type_vector + MPI_Type_contiguous

2015-01-16 Thread Diego Avesani
Dear all, Dear Gus, Dear George, have you seen my example program? (in the attachment) As you suggested, I have tried to *think recursively about the datatypes*, but there is something wrong that I am not able to understand. What do you think? Thanks a lot Diego On 16 January 2015 at 23:23, Gu