Stupid question:

Why not just make your first-level internal API equivalent to the MPI
public API except for s/int/size_t/g and have the Fortran bindings
drop directly into that?  Going through the C int-erface seems like a
recipe for endless pain...
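
Rough sketch of what I mean (the internal names are made up, obviously):

-----
#include <stddef.h>
#include <mpi.h>

/* hypothetical size_t-clean internal entry point */
int ompi_send_internal(void *buf, size_t count, MPI_Datatype type,
                       int dest, int tag, MPI_Comm comm);

/* the public C binding just narrows its int count... */
int MPI_Send(void *buf, int count, MPI_Datatype type,
             int dest, int tag, MPI_Comm comm)
{
    return ompi_send_internal(buf, (size_t)count, type, dest, tag, comm);
}

/* ...and an INTEGER*8 Fortran binding widens its 64-bit counts and calls
   ompi_send_internal directly, never touching the int-erface */
-----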

Jeff

On Thu, Oct 31, 2013 at 4:05 PM, Jeff Squyres (jsquyres)
<jsquy...@cisco.com> wrote:
> On Oct 30, 2013, at 11:55 PM, Jim Parker <jimparker96...@gmail.com> wrote:
>
>> Perhaps I should start with the most pressing issue for me: I need 64-bit
>> indexing.
>>
>> @Martin,
>>    you indicated that even if I get this up and running, the MPI library
>> still uses signed 32-bit ints to count (your term), or index (my term), the
>> receive buffer lengths.  More concretely, in a call to
>> MPI_Allgatherv(sendbuf, sendcount, MPI_INTEGER, recvbuf, recvcounts,
>> displs, MPI_INTEGER, MPI_COMM_WORLD, mpierr): sendcount, recvcounts, and
>> displs must be 32-bit integers, not 64-bit.  Actually, all I need is
>> displs to hold 64-bit values...
>> If this is true, then compiling Open MPI this way is not a solution.  I'll
>> have to restructure my code to collect 31-bit chunks...
>> Not that it matters, but I'm not using DIRAC, but a custom code for
>> circuit analysis.
>> Not that it matters, but I'm not using DIRAC, but a custom code to compute 
>> circuit analyses.
>
> Yes, that is correct -- the MPI specification makes us use C "int" for the
> outer-level count arguments.  We do use larger types than that internally,
> though.
>
> The common workaround for this is to make your own MPI datatype -- perhaps an 
> MPI_TYPE_VECTOR -- that strings together N contiguous datatypes, and then 
> send M of those.
>
> For example, say you need to send 8B (billion) contiguous INTEGERs.  You 
> obviously can't represent 8B with a C int (or a 4-byte Fortran INTEGER).  So 
> what you would do is something like this (forgive me -- I'm a C guy):
>
> -----
> // A buffer of 8 billion ints (allocated on the heap in real code --
> // far too big for a static array or the stack)
> int *my_buffer = ...;
> MPI_Datatype my_type;
> // This makes a datatype of 8 contiguous ints
> MPI_Type_vector(1, 8, 0, MPI_INT, &my_type);
> MPI_Type_commit(&my_type);
> // 1 billion still fits in a C int, so this count is legal
> MPI_Send(my_buffer, 1000000000, my_type, ...);
> -----
>
> This basically sends 1B copies of a type that is 8 ints long, and is
> therefore an 8B-int message.
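>
> One caveat: if the total count isn't evenly divisible by the block length,
> you have to cover the leftovers with a second, plain send.  A rough sketch
> of a helper (hypothetical, untested):
>
> -----
> #include <stddef.h>
> #include <mpi.h>
>
> /* Send n ints even when n exceeds INT_MAX (assumes n/8 still fits). */
> void send_big(int *buf, size_t n, int dest, int tag, MPI_Comm comm)
> {
>     MPI_Datatype chunk;
>     MPI_Type_vector(1, 8, 0, MPI_INT, &chunk);   /* 8 contiguous ints */
>     MPI_Type_commit(&chunk);
>     MPI_Send(buf, (int)(n / 8), chunk, dest, tag, comm);
>     /* the 0-7 ints left over that don't fill a whole chunk */
>     MPI_Send(buf + (n - n % 8), (int)(n % 8), MPI_INT, dest, tag, comm);
>     MPI_Type_free(&chunk);
> }
> -----
>
> (The receiver has to post two matching receives, of course.)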
>
> Make sense?
>
>> @Jeff,
>> Interesting -- your run hits a different error than mine.  You have
>> problems with the passed variable tempInt, which would make sense for the
>> reasons you gave.  However, my problem is that the local variable "rank"
>> gets overwritten by memory corruption after MPI_RECV is called.
>
> Odd.  :-\
>
>> Re: config.log. I will try to have the admin guy recompile tomorrow and see 
>> if I can get the log for you.
>>
>> BTW, I'm using the gcc 4.7.2 compiler suite on a Rocks 5.4 HPC cluster.  I
>> use the options -m64 and -fdefault-integer-8.
>
> Ok.  I was using icc/ifort with -m64 and -i8.
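>
> For concreteness, the two compile lines look roughly like this (file names
> hypothetical; the flags are the ones we each listed):
>
> -----
> # GNU side: promote default INTEGER to 8 bytes
> mpif90 -m64 -fdefault-integer-8 test.f90 -o test_gnu
> # Intel side: -i8 is ifort's equivalent
> mpif90 -m64 -i8 test.f90 -o test_intel
> -----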
>
> --
> Jeff Squyres
> jsquy...@cisco.com
> For corporate legal information go to: 
> http://www.cisco.com/web/about/doing_business/legal/cri/
>



-- 
Jeff Hammond
jeff.scie...@gmail.com
