Ricardo Reis wrote:
On Wed, 17 Nov 2010, Gus Correa wrote:

For what it's worth, the MPI addresses (a.k.a. pointers)
in the Fortran bindings are integers, of the standard 4-byte size, if I remember right.
Take a look at mpif.h, mpi.h and their cousins to make sure.
Unlike the Fortran FFTW "plans", you don't get to declare MPI addresses as big
as you want; MPI chooses their size when it is built, right?
As Pascal pointed out, 4-byte integers would flip sign at around 2GB,
and even unsigned integers won't go beyond 4GB.
Would this be part of the problem?

yes, I think that is the most probable explanation. I've worked around it by using several processes to write the file (after all, I just didn't want to program the bunch of checks needed to spread such a simple thing across several processes...)

I guess all the Open MPI pros and developers are busy now
on Bourbon Street, New Orleans, I mean, at Supercomputing 2010.
Hard to catch their attention right now,
but eventually somebody will clarify this.

oh, it's just a small grain of sand... it doesn't seem worth stopping the whole machine for it...

:)

many thanks all


 Ricardo Reis

 'Non Serviam'

 PhD candidate @ Lasef
 Computational Fluid Dynamics, High Performance Computing, Turbulence
 http://www.lasef.ist.utl.pt

 Cultural Instigator @ Rádio Zero
 http://www.radiozero.pt

 Keep them Flying! Ajude a/help Aero Fénix!

 http://www.aeronauta.com/aero.fenix

 http://www.flickr.com/photos/rreis/

 contacts:  gtalk: kyriu...@gmail.com  skype: kyriusan

                           < sent with alpine 2.00 >



Dear Ricardo

Pascal hit the nail on the head.
Counting with (4-byte) integers seems to be an MPI thing,
perhaps written in stone in the standard.

In any case, here is an old thread discussing a related problem,
namely the number of items (count) in MPI_Send/Recv messages,
which is again an integer, and hence has the same 2GB limitation:

http://www.open-mpi.org/community/lists/users/2009/02/8100.php

Note that Jeff's suggested workaround was to declare
a user-defined MPI type (or perhaps a hierarchy of types),
then concatenate as much data as needed into a single message.
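
Something along these lines, I imagine. This is only a rough, untested
sketch in C; the chunk size, number of chunks, ranks and tags are
made-up numbers of my own, not anything from Jeff's post:

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Pack 2^27 doubles (1 GiB) into one user-defined "element". */
    int chunk_elems = 1 << 27;
    int nchunks     = 4;                     /* 4 GiB total per message */
    double *buf = malloc((size_t)chunk_elems * nchunks * sizeof(double));

    MPI_Datatype chunk;
    MPI_Type_contiguous(chunk_elems, MPI_DOUBLE, &chunk);
    MPI_Type_commit(&chunk);

    /* The count argument is nchunks (4), not 2^29 doubles,
       so it fits comfortably in a 4-byte integer. */
    if (rank == 0)
        MPI_Send(buf, nchunks, chunk, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(buf, nchunks, chunk, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    MPI_Type_free(&chunk);
    free(buf);
    MPI_Finalize();
    return 0;
}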

Granted, my knowledge of MPI-IO is nil,
but I wonder if an approach like this would let you get
around the count limit of the MPI-IO functions,
which sounds no different from the count limit
of the other MPI functions.

Say, you could use MPI_TYPE_CONTIGUOUS or MPI_TYPE_VECTOR
to aggregate big chunks of data (each still smaller than 2GB),
then write a modest number of these chunks/types to the file, I suppose.
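
Reusing the "chunk" type and buffer from the sketch above, I imagine
the MPI-IO write would then look roughly like this (again untested;
the file name and the offset arithmetic are only for illustration):

MPI_File fh;
MPI_File_open(MPI_COMM_WORLD, "bigfile.dat",
              MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

/* Each rank writes its nchunks "chunk" elements at its own offset.
   The count handed to MPI-IO is nchunks (tiny); the large byte
   displacement goes into the 64-bit MPI_Offset instead. */
MPI_Offset offset = (MPI_Offset)rank * nchunks * chunk_elems * sizeof(double);
MPI_File_write_at(fh, offset, buf, nchunks, chunk, MPI_STATUS_IGNORE);

MPI_File_close(&fh);

That way the big numbers live in the 64-bit MPI_Offset, and the
integer count stays small.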

Regards,
Gus
