Hi,
Thank you for your answer and the example code.
Creating the metadata (groups, datasets) is clear now and works fine, but
I have one last doubt: what if I'm running 4 MPI processes but
only 3 of them have data to write to a given dataset?
Since the H5Dwrite() call is in collective mode, my program hangs...
How can I solve this?
Regards,
Rafal
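[A common way out of this hang, sketched below as a minimal illustration rather than a reply from the thread: every rank still makes the collective H5Dwrite() call, but a rank with nothing to write selects an empty selection on both the file and memory dataspaces with H5Sselect_none(). The function name, parameters, and double-typed data are illustrative assumptions.]

```c
#include "hdf5.h"

/* Sketch: collective write where some ranks contribute no data.
   Assumes dset_id refers to an open 1-D dataset, my_offset/my_count
   describe this rank's contiguous slab, and my_count may be 0. */
void write_my_part(hid_t dset_id, hsize_t my_offset, hsize_t my_count,
                   const double *my_data)
{
    hid_t file_space = H5Dget_space(dset_id);

    /* Use a dummy extent of 1 when this rank owns nothing, so the
       memory dataspace is always valid; the selection decides what
       (if anything) actually gets written. */
    hsize_t dim = (my_count > 0) ? my_count : 1;
    hid_t mem_space = H5Screate_simple(1, &dim, NULL);

    if (my_count > 0) {
        /* This rank writes its slab of the dataset. */
        H5Sselect_hyperslab(file_space, H5S_SELECT_SET,
                            &my_offset, NULL, &my_count, NULL);
    } else {
        /* No data on this rank: select nothing, but still take part
           in the collective call so the other ranks don't hang. */
        H5Sselect_none(file_space);
        H5Sselect_none(mem_space);
    }

    hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
    H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);

    /* Collective on all ranks; an empty selection writes nothing. */
    H5Dwrite(dset_id, H5T_NATIVE_DOUBLE, mem_space, file_space,
             dxpl, my_data);

    H5Pclose(dxpl);
    H5Sclose(mem_space);
    H5Sclose(file_space);
}
```

The point is that "collective" means every rank must reach the call, not that every rank must contribute data; an empty selection satisfies the collective requirement with a zero-byte contribution.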
On 2017-09-27 at 22:50, Nelson, Jarom wrote:
Calls that affect the metadata need to be collective so that each
process has a consistent view of what the file metadata should be.
https://support.hdfgroup.org/HDF5/doc/RM/CollectiveCalls.html
Something like this (or the attached):
plist_id = H5Pcreate(H5P_FILE_ACCESS);
H5Pset_fapl_mpio(plist_id, comm, info);
H5Pset_all_coll_metadata_ops( plist_id, true );
file_id = H5Fcreate(H5FILE_NAME, H5F_ACC_TRUNC, H5P_DEFAULT, plist_id);
H5Pclose(plist_id);
for (int procid = 0; procid < mpi_size; ++procid) {
hid_t gr_id = H5Gcreate(file_id, std::to_string(procid).c_str(),
H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
H5Gclose(gr_id);
}
H5Fclose(file_id);
-----Original Message-----
From: Hdf-forum [mailto:[email protected]] On Behalf
Of Rafal Lichwala
Sent: Wednesday, September 27, 2017 12:32 AM
To: [email protected]
Subject: Re: [Hdf-forum] high level API for parallel version of HDF5 library
Hi Barbara, Hi All,
Thank you for your answer. That's clear now regarding the
H5TBmake_table() call, but...
H5Gcreate() is not a high-level API, is it?
So why can't I use it from parallel processes?
Maybe I'm just doing something wrong, so could you please provide a
short example of how to create a set of groups (one per process
number) when running 4 parallel MPI processes? You can limit the example
code to the sequence of HDF5 calls only...
My current code works fine for a single process, but when I try it with 2
(or more) parallel processes the resulting file is corrupted:
plist_id = H5Pcreate(H5P_FILE_ACCESS);
H5Pset_fapl_mpio(plist_id, comm, info);
H5Pset_all_coll_metadata_ops(plist_id, true);
file_id = H5Fcreate(H5FILE_NAME, H5F_ACC_TRUNC, H5P_DEFAULT, plist_id);
H5Pclose(plist_id);
hid_t gr_id = H5Gcreate(file_id, std::to_string(procid).c_str(),
H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
H5Gclose(gr_id);
H5Fclose(file_id);
Best regards,
Rafal
On 2017-09-25 at 22:20, Barbara Jones wrote:
> Hi Rafal,
>
> No, the HDF5 High Level APIs are not supported in the parallel
version of HDF5.
>
> -Barbara
> [email protected] <mailto:[email protected]>
>
> -----Original Message-----
> From: Hdf-forum [mailto:[email protected]] On
Behalf Of Rafal Lichwala
> Sent: Monday, September 18, 2017 8:53 AM
> To: [email protected] <mailto:[email protected]>
> Subject: [Hdf-forum] high level API for parallel version of HDF5 library
>
> Hi,
>
> Can I use high-level API function calls (H5TBmake_table(...)) in the
parallel version of the HDF5 library?
> There are no property list parameters for those function calls...
>
> Regards,
> Rafal
>
>
> _______________________________________________
> Hdf-forum is for HDF software users discussion.
> [email protected] <mailto:[email protected]>
> http://lists.hdfgroup.org/mailman/listinfo/hdf-forum_lists.hdfgroup.org
> Twitter: https://twitter.com/hdf5
>
>
--
***
Rafał Lichwała
Poznańskie Centrum Superkomputerowo-Sieciowe
ul. Jana Pawła II nr 10
61-139 Poznań
e-mail: [email protected]
***