Hi,

sorry for the delay, I was sucked into a hole last week...

Maria Luis Castela <[email protected]> writes:
> Yes! Indeed, the second is the efficient way to store my 2D-slice. 

Well, the way the data is stored (in RAM or in the file) is exactly the
same for both options; the only thing that changes is how you call the
I/O writing routines. But let's stick with option 2:

> So, following option 2:
>
> 1- I’ve grouped the processors with coord(3).EQ.1 from original_group:
>
>       MPI_COMM_GROUP(MPI_COMM_WORLD, original_group)
>       MPI_GROUP_INCL(original_group, nb_process_2D_SLICE,
>       processes_2D_SLICE, 2D_group,code)
>       
> 2- I’ve created a MPI communicator for this group:
>
>       MPI_COMM_CREATE(MPI_COMM_WORLD, my_group_2D, MPI_COMM_2D_SLICE, code)
>

OK. I do it with MPI_CART_SUB, but I guess it doesn't matter, as long as
you end up with a communicator that holds only the processes with coord(3).EQ.1.
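
Just in case it helps, below is a minimal, self-contained sketch of the
MPI_CART_SUB route. The names (cart_comm, slice_comm, slice_rank, npx/npy/npz)
and the grid sizes are made up for the example, they are not from my actual
code: dropping the third dimension of a 3D Cartesian communicator gives every
z-plane its own sub-communicator.

      PROGRAM slice_comm_demo
        USE mpi
        IMPLICIT NONE
        INTEGER, PARAMETER :: npx = 4, npy = 4, npz = 2   ! example process grid
        INTEGER :: cart_comm, slice_comm, slice_rank, code
        INTEGER, DIMENSION(3) :: dims
        LOGICAL, DIMENSION(3) :: periods, remain_dims

        CALL MPI_INIT(code)

        ! 3D Cartesian communicator (run with exactly npx*npy*npz processes)
        dims    = (/ npx, npy, npz /)
        periods = .FALSE.
        CALL MPI_CART_CREATE(MPI_COMM_WORLD, 3, dims, periods, .TRUE., &
                             cart_comm, code)

        ! Keep dimensions 1 and 2, drop dimension 3: every process lands in
        ! the sub-communicator of its own z-plane, i.e. of its coord(3).
        remain_dims = (/ .TRUE., .TRUE., .FALSE. /)
        CALL MPI_CART_SUB(cart_comm, remain_dims, slice_comm, code)

        ! With MPI_CART_SUB every process gets a valid sub-communicator,
        ! so asking for the rank here is safe.
        CALL MPI_COMM_RANK(slice_comm, slice_rank, code)

        CALL MPI_FINALIZE(code)
      END PROGRAM slice_comm_demo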

> —Problem—
>
> When I do: 
>       CALL MPI_COMM_RANK(MPI_COMM_2D_SLICE, 2D_ranks, code)
> I’ve a segmentation fault… Did you have this problem?

I never need to call MPI_COMM_RANK with the 'slice' communicator.
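
That said, one likely explanation for the crash (just a guess, assuming
MPI_COMM_CREATE is called on every rank of MPI_COMM_WORLD): the ranks that are
not part of the group get MPI_COMM_NULL back, and calling MPI_COMM_RANK on
MPI_COMM_NULL is invalid and typically segfaults. A guard along these lines
avoids it (rank_2D is an illustrative name, since 2D_ranks is not a valid
Fortran identifier):

      ! Only processes that actually belong to the new communicator may
      ! query it; the others received MPI_COMM_NULL from MPI_COMM_CREATE.
      IF (MPI_COMM_2D_SLICE .NE. MPI_COMM_NULL) THEN
         CALL MPI_COMM_RANK(MPI_COMM_2D_SLICE, rank_2D, code)
      END IF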

> 3 - You say that instead of MPI_COMM_WORLD I shall use MPI_COMM_2D_SLICE 
> here?:
>               
>               comm = MPI_COMM_2D_SLICE
>               h5pset_fapl_mpio_f(plist_id, comm, info)

The relevant part of my code, which makes sure that only the processes at the
bottom of my domain write to the file, looks like this:

            IF (my_coordinates(3) .EQ. 0) THEN
               CALL h5pcreate_f(H5P_FILE_ACCESS_F, fapl_id, error)
               CALL h5pset_fapl_mpio_f(fapl_id, pmlzcomm, MPI_INFO_NULL, error)
               CALL h5fopen_f(H5filename, H5F_ACC_RDWR_F, file_id, error, &
                    access_prp = fapl_id)

               !!! I only need this barrier on one of the clusters we use,
               !!! don't know why...
               CALL MPI_BARRIER(pmlzcomm, error)

               [...]

               CALL h5dwrite_f(dset_id, H5T_NATIVE_DOUBLE, &
                    pml_v2%z_low(slx:elx,sly:ely,:,:), dims_pml, error, &
                    file_space_id = fspace_id, mem_space_id = mspace_id, &
                    xfer_prp = dxpl_id)

               CALL h5sclose_f(fspace_id, error)
               CALL h5sclose_f(mspace_id, error)
               CALL h5dclose_f(dset_id, error)
               CALL h5pclose_f(fapl_id, error)
               CALL h5pclose_f(dxpl_id, error)
               CALL h5fclose_f(file_id, error)
            END IF
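
In case it is useful: the transfer property list dxpl_id that the h5dwrite_f
call above uses is typically set up along these lines (only a sketch, not
necessarily what the [...] part of my code does; H5FD_MPIO_INDEPENDENT_F works
too if you don't want collective writes):

               ! Dataset transfer property list requesting collective MPI-IO
               CALL h5pcreate_f(H5P_DATASET_XFER_F, dxpl_id, error)
               CALL h5pset_dxpl_mpio_f(dxpl_id, H5FD_MPIO_COLLECTIVE_F, error)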

Cheers,
-- 
Ángel de Vicente
http://www.iac.es/galeria/angelv/          