Hi Chao,

I ran into this once, and our cluster admin told me that parallel HDF5
has some serious issues on NFS-mounted file systems that can lead to
data corruption. He told me to either use serial HDF5 (i.e., an HDF5
build configured without --enable-parallel) or write to a different
file system; the second is clearly the better option if possible.
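
If the scripts have to stay on NFS, you can still keep the HDF5 output
off it: Meep's Scheme interface has (use-output-directory ...), which is
what prints the "Meep: using output directory" line in your log, and you
can point it at a node-local disk and copy the files back afterwards.
A minimal sketch based on the standard tutorial example (the /tmp path
and the toy geometry are placeholders for your own setup):

    ; local-output.ctl -- write HDF5 output to local disk, not the NFS share
    (use-output-directory "/tmp/meep-out")  ; node-local scratch directory

    (set! geometry-lattice (make lattice (size 16 8 no-size)))
    (set! geometry (list (make block (center 0 0) (size infinity 1 infinity)
                               (material (make dielectric (epsilon 12))))))
    (set! sources (list (make source
                              (src (make continuous-src (frequency 0.15)))
                              (component Ez) (center -7 0))))
    (set! pml-layers (list (make pml (thickness 1.0))))
    (set-param! resolution 10)

    (run-until 200
               (at-beginning output-epsilon)
               (at-every 0.6 output-efield-z))

Run it as usual (e.g. mpirun -np 4 meep-mpi local-output.ctl) and copy
/tmp/meep-out/*.h5 back to /home/share when the run finishes; keep in
mind the files end up on the local disk of whichever node actually does
the writing, so you may have to fetch them from there.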

Best,
Matt

On Wed, 9 Dec 2009, pengchao wrote:

Hi all,

I am running meep-mpi on a Linux cluster, using NFS to share the
directory for scripts and output. I found that creating the HDF5
output files becomes very slow when the output directory is on the
NFS share; with a local output directory, everything behaves normally.
I suspect this is a performance issue with parallel HDF5 writes over
NFS. Has anyone seen this before? Thanks for your comments.

The cluster runs Ubuntu Jaunty with Meep 1.1 compiled against MPICH 1.2.
The NFS export on node1 is:

/home/share     10.38.0.1/255.255.255.0(rw,sync,no_subtree_check)

and NFS mount on other nodes as:

node1:/home/share     /home/share     nfs     bg,intr,noac     0    0
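
(The noac option is there to keep the nodes' views of the share
consistent, but as far as I understand it disables NFS attribute
caching entirely, so every access goes back to the server; that may be
part of the slowdown. An untested alternative for an output-only
mount, with the actimeo value picked arbitrarily, would be

node1:/home/share     /home/share     nfs     bg,intr,actimeo=3     0    0

though relaxed caching is presumably exactly what parallel HDF5 cannot
tolerate safely.)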

Here is the output log:
-------
.....
time for set_epsilon = 27.0391 s
-----------
Meep: using output directory "/home/share/node2/20091208n2a2"
creating output file "/home/share/node2/20091208n2a2/hz-slice.h5"...
creating output file "/home/share/node2/20091208n2a2/eps-000000.00.h5"...
----------------------------------------------------------> *stopped here for a long time (20 mins in my case)*

on time step 1 (time=0.0333333), 60.1328 s/step
on time step 10 (time=0.333333), 0.493924 s/step
on time step 18 (time=0.6), 0.501953 s/step
on time step 26 (time=0.866667), 0.50293 s/step

....
-------------------

BR
Chao

_______________________________________________
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss

