Ricardo Reis wrote:
On Wed, 17 Nov 2010, Pascal Deveze wrote:

I think the limit for a write (and also for a read) is 2^31-1 (2G-1). In a C program, beyond this value an integer becomes negative. I suppose this is also true in Fortran. The solution is to make a loop of writes (reads) of no more than this value each.

Is that MPI-IO specific? I remember that when using FFTW you are asked to use INTEGER(8) for the returned handle. This is used as a pointer interface with the library, and (8) will be equivalent to a 64-bit pointer (more or less; sorry if I am not being exact).

Anyway, if I have no problems writing big files with plain Fortran, shouldn't MPI-IO behave the same way? And, more to the point, if it doesn't, shouldn't that be documented somewhere?

Does anyone know if this carries over to other MPI implementations (or is the answer "download, try it and tell us")?

best,



For what it's worth, the MPI addresses (a.k.a. pointers)
in the Fortran bindings are integers, of standard size 4 bytes, IIRC.
Take a look at mpif.h, mpi.h and their cousins to make sure.
Unlike the Fortran FFTW "plans", you don't declare MPI addresses as big
as you want; MPI chooses their size when it is built, right?
As Pascal pointed out, 4-byte signed integers flip sign at around 2G,
and even unsigned integers won't go beyond 4G.
Could this be part of the problem?

I guess all the Open MPI pros and developers are busy now
on Bourbon Street, New Orleans (I mean, at Supercomputing 2010).
Hard to catch their attention right now,
but eventually somebody will clarify this.

Gus

 Ricardo Reis

 'Non Serviam'

 PhD candidate @ Lasef
 Computational Fluid Dynamics, High Performance Computing, Turbulence
 http://www.lasef.ist.utl.pt

 Cultural Instigator @ Rádio Zero
 http://www.radiozero.pt

 Keep them Flying! Ajude a/help Aero Fénix!

 http://www.aeronauta.com/aero.fenix

 http://www.flickr.com/photos/rreis/

 contacts:  gtalk: kyriu...@gmail.com  skype: kyriusan

                           < sent with alpine 2.00 >


------------------------------------------------------------------------

_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users
