On May 22, 2013, at 9:23 AM, Eric Chamberland <eric.chamberl...@giref.ulaval.ca> wrote:
> On 05/22/2013 11:33 AM, Tom Rosmond wrote:
>> Thanks for the confirmation of the MPIIO problem. Interestingly, we
>> have the same problem when using MPIIO in Intel MPI. So something
>> fundamental seems to be wrong.
>
> I think, but I am not sure, that it is because the MPI I/O (ROMIO) code
> is the same for all distributions...
>
> It was written by Rob Latham.
>
> Maybe some developers could confirm this?

Well, ROMIO was written by Argonne/MPICH (it's unfair to point the finger solely at Rob) and has been picked up by pretty much everyone. The issue isn't a bug in MPI-IO, but rather in the MPI function definitions: they stipulate that the count parameter be an int, which is 32 bits on the systems described, so there is no way to reference anything larger than 2^31 elements in a single call. I'm afraid you'll have to do the multiple reads, or switch to a system where the default integer is 64 bits.

> Eric
>
>> T. Rosmond
>>
>> On Wed, 2013-05-22 at 11:21 -0400, Eric Chamberland wrote:
>>> I have experienced the same problem... and worse, I have discovered a
>>> bug in MPI I/O...
>>>
>>> Look here:
>>> http://trac.mpich.org/projects/mpich/ticket/1742
>>>
>>> and here:
>>> http://www.open-mpi.org/community/lists/users/2012/10/20511.php
>>>
>>> Eric
>>>
>>> On 05/21/2013 03:18 PM, Tom Rosmond wrote:
>>>> Hello:
>>>>
>>>> A colleague and I are running an atmospheric ensemble data
>>>> assimilation system using MPI-IO. We find that if, for an individual
>>>> MPI_FILE_READ_AT_ALL, the block of data read exceeds 2**31 elements,
>>>> the program fails. Our application is 32-bit Fortran (Intel), so we
>>>> can certainly see why this might be expected. Is this the case? We
>>>> have a workaround, doing multiple reads from the file while moving
>>>> the file view, so it isn't a serious problem.
>>>>
>>>> Thanks for any advice or suggestions.
>>>>
>>>> T. Rosmond
>>>>
>>>> _______________________________________________
>>>> users mailing list
>>>> us...@open-mpi.org
>>>> http://www.open-mpi.org/mailman/listinfo.cgi/users
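For what it's worth, Tom's multiple-read workaround amounts to splitting a 64-bit element count into pieces that each fit in a 32-bit int. Here is a minimal sketch in C under some stated assumptions: the function name `read_in_chunks`, the `chunk_read_fn` callback indirection, and the `max_chunk` parameter are all illustrative, not anything from MPI itself. In the real MPI case the callback would wrap a call such as MPI_File_read_at (advancing the file offset by the elements already read), with `max_chunk` set to INT_MAX.

```c
#include <limits.h>
#include <stddef.h>

/* Hypothetical callback standing in for one bounded read, e.g. a wrapper
 * around MPI_File_read_at at the given element offset. */
typedef int (*chunk_read_fn)(long long elem_offset, char *buf,
                             int count, void *ctx);

/* Read nelems elements by issuing repeated reads of at most max_chunk
 * elements each, so every individual call keeps its count within the
 * 32-bit int limit of the MPI interface. */
int read_in_chunks(long long nelems, size_t elem_size, char *buf,
                   long long max_chunk, chunk_read_fn read_fn, void *ctx)
{
    long long done = 0;
    while (done < nelems) {
        long long left = nelems - done;
        int n = (int)(left < max_chunk ? left : max_chunk);
        int err = read_fn(done, buf + done * elem_size, n, ctx);
        if (err != 0)
            return err;  /* propagate the first failure */
        done += n;
    }
    return 0;
}
```

For a 3 * 10^9-element read of doubles this would issue two calls (INT_MAX elements, then the remainder), which is exactly the "multiple reads while moving the file view" approach described above.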