Ok, I agree that it is likely not due to the setup of the scratch directory.

What version of ifort was used? If you happened to use 16.0.3.210, maybe it is caused by an ifort bug [ https://software.intel.com/en-us/articles/read-failure-unformatted-file-io-psxe-16-update-3 ].
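(If the exact version is not known offhand, running

    ifort --version

on the machine where WIEN2k was compiled should report it, assuming that compiler is still in the path.)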

Perhaps you can use the Linux "od" command to troubleshoot and identify what the data mismatch is between the writing and reading of the 3Mn.vectordn_1 file, similar to what is described on the web pages at:

https://software.intel.com/en-us/forums/intel-fortran-compiler-for-linux-and-mac-os-x/topic/269993
https://software.intel.com/en-us/forums/intel-fortran-compiler-for-linux-and-mac-os-x/topic/270436
https://software.intel.com/en-us/forums/intel-fortran-compiler-for-linux-and-mac-os-x/topic/268503

It might be harder to diagnose with the large 3Mn.vectordn_1, though, which looks to be about 12 GB. So you may want to set up an MPI SO calculation that produces a smaller case.vectordn_1 for that.
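For example, something along these lines dumps the start of the file so the leading record-length markers can be inspected (GNU od options; this assumes ifort's default sequential unformatted layout with 4-byte record markers):

    od -A d -t x1 3Mn.vectordn_1 | head -40    # hex dump of the first few hundred bytes
    od -A d -t d4 -N 16 3Mn.vectordn_1         # first 16 bytes as 4-byte integers (record-length markers)

Comparing the record lengths seen in the dump against what lapwso expects to read should show where the mismatch is.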

On 11/13/2016 7:30 AM, Md. Fhokrul Islam wrote:

Hi Gavin,


In my .bashrc, scratch is defined as $SCRATCH = ./, so the command echo $SCRATCH always returns ./

For large jobs, I use a local temporary directory that is associated with each node in our system and is given by $SNIC_TMP. This temporary directory is created on the fly, so I set $SCRATCH = $SNIC_TMP in my job submission script (roughly as sketched below). As I said, this setup works fine if I do MPI calculations without spin-orbit and I get converged results. But if I submit the job after initializing with spin-orbit, it crashes at lapwso. So I think the problem is probably not due to the setup of the scratch directory; it has something to do with the MPI version of LAPWSO.
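For reference, the relevant part of the submission script is essentially this single line; bash export syntax is assumed and the batch directives are omitted:

    export SCRATCH=$SNIC_TMP    # point the scratch variable to the node-local temporary directory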



Thanks for your comment.


Fhokrul

