I think I found the problem. I filed a PR against master, and if that
passes I will file a PR for the 2.x branch.
Thanks!
Edgar
On 7/8/2016 1:14 PM, Eric Chamberland wrote:
On 08/07/16 01:44 PM, Edgar Gabriel wrote:
ok, but just to be able to construct a test case, basically what you are
doing is
MPI_File_write_all_begin(fh, NULL, 0, some_datatype);
MPI_File_write_all_end(fh, NULL, &status);
is this correct?
Yes, but with 2 processes: rank 0 writes something, rank 1 writes
nothing.
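[For reference, a minimal reproducer along the lines described in this
exchange might look as follows; the file name, the MPI_INT datatype,
and the element count are illustrative assumptions, not details taken
from the thread:]

#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;
    int data[4] = {0, 1, 2, 3};
    MPI_File fh;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Both ranks must take part in the collective open. */
    MPI_File_open(MPI_COMM_WORLD, "testfile.bin",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    if (rank == 0) {
        /* rank 0 writes a few elements */
        MPI_File_write_all_begin(fh, data, 4, MPI_INT);
        MPI_File_write_all_end(fh, data, &status);
    } else {
        /* rank 1 writes nothing: zero count and NULL buffer, which is
           where the reported crash occurs in MPI_File_write_all_end */
        MPI_File_write_all_begin(fh, NULL, 0, MPI_INT);
        MPI_File_write_all_end(fh, NULL, &status);
    }

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}

[Built with mpicc and run with 2 processes, this should exercise the
zero-length-participant path described above.]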
ok, but just to be able to construct a test case, basically what you are
doing is
MPI_File_write_all_begin(fh, NULL, 0, some_datatype);
MPI_File_write_all_end(fh, NULL, &status);
is this correct?
Thanks
Edgar
On 7/8/2016 12:19 PM, Eric Chamberland wrote:
Hi,
On 08/07/16 12:52 PM, Edgar Gabriel wrote:
The default MPI I/O library has changed in the 2.x release to OMPIO for
most file systems.
ok, I am now doing I/O on my own hard drive... but I can test over NFS
easily. For Lustre, I will have to produce a reduced example out of our
test suite...
The default MPI I/O library has changed in the 2.x release to OMPIO for
most file systems. I can look into that problem; any chance to get
access to the test suite that you mentioned?
Thanks
Edgar
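[While the fix is pending, Open MPI lets you select the I/O component
at run time, which can help confirm that the crash is OMPIO-specific.
A sketch, assuming the ROMIO component in the 2.x series is named
romio314 (check ompi_info for the names your build actually provides):]

#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    /* MCA parameters can be given through the environment; this must
       happen before MPI_Init so the io framework sees it. The
       component name "romio314" is an assumption for the 2.x series;
       verify with: ompi_info | grep " io" */
    setenv("OMPI_MCA_io", "romio314", 1);

    MPI_Init(&argc, &argv);
    /* ... MPI-I/O calls now go through ROMIO instead of OMPIO ... */
    MPI_Finalize();
    return 0;
}

[The same effect is more commonly achieved on the command line, e.g.
mpirun --mca io romio314 -np 2 ./a.out.]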
On 7/8/2016 11:32 AM, Eric Chamberland wrote:
Hi,
I am testing the 2.X release candidate for the first time.
I have a segmentation violation using MPI_File_write_all_end(MPI_File
fh, const void *buf, MPI_Status *status).
The "special" thing may be that in the faulty test cases, there are
processes that haven't written anything, so they call
MPI_File_write_all_begin/end with a count of 0 and a NULL buffer.