Hi,

Rob Latham <[email protected]> writes:
>> our Fortran code works with double-precision data, but recently I
>> added the option of saving the data to files in single precision. On
>> our workstation this causes no problem, and the writing time in either
>> double or single precision is basically the same. But on a cluster
>> that uses GPFS, writing in single precision slows down drastically.
>
> When the type in memory is different from the type in the file, HDF5 has to
> break collective I/O. There are some property lists you can interrogate to
> confirm this (H5Pget_mpio_no_collective_cause; it returns a bitfield you'll
> have to parse yourself, unless HDF5 provides a "flags to string" routine
> that I don't know about).
>
> In such cases you'll see better performance if you convert the type in
> memory first and then write.
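
(For the archives, a minimal C sketch of how that check might look. The
H5D_MPIO_* flag names are my reading of the public headers, so verify
them against your HDF5 version:

#include <stdint.h>
#include <stdio.h>
#include "hdf5.h"

/* After a collective H5Dwrite, ask the dataset-transfer property list
   why collective I/O was (or was not) actually performed. */
void report_collective_cause(hid_t dxpl)
{
    uint32_t local_cause = 0, global_cause = 0;

    H5Pget_mpio_no_collective_cause(dxpl, &local_cause, &global_cause);

    if (global_cause == H5D_MPIO_COLLECTIVE)
        printf("write was fully collective\n");
    else if (global_cause & H5D_MPIO_DATATYPE_CONVERSION)
        printf("collective I/O broken: datatype conversion\n");
    else
        printf("collective I/O broken, cause bits: 0x%x\n",
               (unsigned)global_cause);
}

One would call this on the same transfer property list that was set up
with H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE), right after the
H5Dwrite call.)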

OK, I see. I'll try to find an alternative approach then, because the
performance penalty when saving in single precision is not acceptable at
the moment. Something along the lines of the sketch below, perhaps.
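
This is a hedged C sketch of Rob's suggestion: down-convert the buffer
in memory and then write with matching memory and file types, so HDF5
has no datatype conversion to do and collective I/O can stay intact.
The function and variable names are placeholders, not from our code:

#include <stdlib.h>
#include "hdf5.h"

/* Convert the double-precision buffer to single precision in memory,
   then write with matching memory/file types. 'dset' is assumed to be
   a dataset created with a single-precision (H5T_NATIVE_FLOAT) file
   type; n is the number of local elements. */
herr_t write_as_single(hid_t dset, hid_t memspace, hid_t filespace,
                       hid_t dxpl, const double *data, size_t n)
{
    float *buf = malloc(n * sizeof *buf);
    herr_t status;
    size_t i;

    if (buf == NULL)
        return -1;

    for (i = 0; i < n; i++)
        buf[i] = (float)data[i];      /* explicit down-conversion */

    status = H5Dwrite(dset, H5T_NATIVE_FLOAT, memspace, filespace,
                      dxpl, buf);
    free(buf);
    return status;
}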

Thanks,
-- 
Ángel de Vicente
http://www.iac.es/galeria/angelv/          

