Re: [OMPI users] MPIIO and EXT3 file systems

2011-08-29 Thread Tom Rosmond
On Mon, 2011-08-29 at 14:22 -0500, Rob Latham wrote:
> On Mon, Aug 22, 2011 at 08:38:52AM -0700, Tom Rosmond wrote:
> > Yes, we are using collective I/O (mpi_file_write_at_all,
> > mpi_file_read_at_all). The swapping of fortran and mpi-io is just
> > branches in the code at strategic locations. …
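The collective calls named above (mpi_file_write_at_all, mpi_file_read_at_all) can be sketched as follows. This is a minimal illustration, not the poster's actual code: the file name 'data.bin', the local record length n, and the per-rank offsets are all assumptions.

```fortran
program collective_write
  use mpi
  implicit none
  integer :: ierr, rank, nprocs, fh
  integer, parameter :: n = 1024          ! local buffer length (assumed)
  double precision :: buf(n)
  integer(kind=MPI_OFFSET_KIND) :: offset

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  buf = dble(rank)

  call MPI_File_open(MPI_COMM_WORLD, 'data.bin', &
                     MPI_MODE_CREATE + MPI_MODE_WRONLY, &
                     MPI_INFO_NULL, fh, ierr)

  ! Each rank writes its block at a disjoint byte offset; the _all
  ! suffix makes the call collective, so every rank in the
  ! communicator must participate.
  offset = int(rank, MPI_OFFSET_KIND) * int(n, MPI_OFFSET_KIND) * 8

  call MPI_File_write_at_all(fh, offset, buf, n, MPI_DOUBLE_PRECISION, &
                             MPI_STATUS_IGNORE, ierr)

  call MPI_File_close(fh, ierr)
  call MPI_Finalize(ierr)
end program collective_write
```

With explicit disjoint offsets like this, the MPI-IO layer (ROMIO in Open MPI) can merge the ranks' requests into large contiguous writes, which is the usual motivation for preferring the collective form over independent mpi_file_write_at calls.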

Re: [OMPI users] MPIIO and EXT3 file systems

2011-08-29 Thread Rob Latham
On Mon, Aug 22, 2011 at 08:38:52AM -0700, Tom Rosmond wrote:
> Yes, we are using collective I/O (mpi_file_write_at_all,
> mpi_file_read_at_all). The swapping of fortran and mpi-io is just
> branches in the code at strategic locations. Although the mpi-io files
> are readable with fortran direct access …

Re: [OMPI users] MPIIO and EXT3 file systems

2011-08-22 Thread Tom Rosmond
On Mon, 2011-08-22 at 10:23 -0500, Rob Latham wrote:
> On Thu, Aug 18, 2011 at 08:46:46AM -0700, Tom Rosmond wrote:
> > We have a large fortran application designed to run doing IO with either
> > mpi_io or fortran direct access. On a linux workstation (16 AMD cores)
> > running openmpi 1.5.3 and …

Re: [OMPI users] MPIIO and EXT3 file systems

2011-08-22 Thread Rob Latham
On Thu, Aug 18, 2011 at 08:46:46AM -0700, Tom Rosmond wrote:
> We have a large fortran application designed to run doing IO with either
> mpi_io or fortran direct access. On a linux workstation (16 AMD cores)
> running openmpi 1.5.3 and Intel fortran 12.0 we are having trouble with
> random failures …

[OMPI users] MPIIO and EXT3 file systems

2011-08-18 Thread Tom Rosmond
We have a large fortran application designed to run doing IO with either mpi_io or fortran direct access. On a linux workstation (16 AMD cores) running openmpi 1.5.3 and Intel fortran 12.0 we are having trouble with random failures with the mpi_io option which do not occur with conventional fortran …
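The "fortran direct access" alternative mentioned here would look roughly like the sketch below. The unit number, file name, and record length are assumptions; note that with Intel fortran the recl= unit defaults to 4-byte words rather than bytes unless -assume byterecl is used, which matters when the same file must also be read with byte-addressed MPI-IO.

```fortran
program direct_write
  implicit none
  integer, parameter :: n = 1024        ! record length in elements (assumed)
  double precision :: buf(n)
  integer :: recl_units, irec

  buf = 1.0d0

  ! Portable way to get the recl value for this buffer in whatever
  ! units (bytes or words) the compiler uses.
  inquire(iolength=recl_units) buf

  open(unit=10, file='data.bin', access='direct', recl=recl_units, &
       form='unformatted', status='replace')

  irec = 1                              ! record number; in a parallel code
  write(10, rec=irec) buf               ! each process would own distinct records

  close(10)
end program direct_write
```

Because direct-access records here carry no record markers in the unformatted stream beyond the fixed record length, a file written this way can line up byte-for-byte with one written by mpi_file_write_at_all at matching offsets, provided the recl units are accounted for.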