Re: [OMPI users] Can't read more than 2^31 bytes with MPI_File_read, regardless of type?

2012-08-07 Thread Rayson Ho
I originally thought that it was an issue related to 32-bit executables, but it seems to affect 64-bit as well... I found references to this problem -- it was reported back in 2007: http://lists.mcs.anl.gov/pipermail/mpich-discuss/2007-July/002600.html If you look at the code, you will find…
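
For readers hitting the same wall before a fix lands, a common workaround is to split the transfer so that each individual call keeps its count below 2^31. The sketch below is illustrative only, not code from the thread; the filename, chunk size, and single in-memory buffer are assumptions:

    /* Minimal sketch: read a file larger than 2 GiB in chunks whose
     * element count stays below 2^31. */
    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_File   fh;
        MPI_Offset fsize, off = 0;
        const MPI_Offset chunk = 1 << 30;   /* 1 GiB per call, well under 2^31 */

        MPI_Init(&argc, &argv);
        MPI_File_open(MPI_COMM_WORLD, "big.dat", MPI_MODE_RDONLY,
                      MPI_INFO_NULL, &fh);
        MPI_File_get_size(fh, &fsize);
        char *buf = malloc(fsize);          /* assumes the whole file fits in memory */

        while (off < fsize) {
            MPI_Offset n = (fsize - off < chunk) ? fsize - off : chunk;
            /* the count argument is a C int, so keep each read below 2^31 elements */
            MPI_File_read_at(fh, off, buf + off, (int)n, MPI_BYTE,
                             MPI_STATUS_IGNORE);
            off += n;
        }

        MPI_File_close(&fh);
        free(buf);
        MPI_Finalize();
        return 0;
    }

Using a larger elementary datatype (for example, a contiguous type built from many MPI_BYTEs) is another way to shrink the count below the int limit.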

Re: [OMPI users] Can't read more than 2^31 bytes with MPI_File_read, regardless of type?

2012-08-07 Thread Richard Shaw
On Tuesday, 7 August, 2012 at 12:21 PM, Rob Latham wrote: > Hi. Known problem in the ROMIO MPI-IO implementation (which OpenMPI > uses). Been on my list of "things to fix" for a while. OK, thanks. I'm glad it's not just us. Is there a timescale for this being fixed? Because if it's a long-term…

Re: [OMPI users] Parallel I/O doesn't work for derived datatypes with Fortran 90 interface

2012-08-07 Thread Jeff Squyres
On Aug 7, 2012, at 3:13 PM, Paul Romano wrote: > Thanks for your response Jeff. My offset is of kind MPI_OFFSET_KIND, which > leads me to believe it is the derived type that is causing the compilation > error. I'm also able to successfully compile and run the same code using > MPICH2. That's…

Re: [OMPI users] Using MPI derived datatypes

2012-08-07 Thread Jeff Squyres
On Aug 3, 2012, at 7:36 AM, Grzegorz Maj wrote: > I would like my MPI processes to exchange some structured data. That > data is represented by plain structures containing basic datatypes. I > would like to use MPI derived datatypes, because of their portability > and good performance. > > I would…
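
For readers looking for the usual recipe here, a hedged sketch of describing a plain C struct with MPI_Type_create_struct follows; the particle_t layout is an invented example, not the poster's actual data:

    #include <mpi.h>
    #include <stddef.h>

    /* Illustrative struct; the fields are assumptions for the example. */
    typedef struct {
        int    id;
        double coords[3];
    } particle_t;

    /* Build an MPI datatype matching particle_t so arrays of it can be
     * sent and received directly. */
    static MPI_Datatype make_particle_type(void)
    {
        MPI_Datatype tmp, particle_type;
        int          blocklens[2] = { 1, 3 };
        MPI_Aint     displs[2]    = { offsetof(particle_t, id),
                                      offsetof(particle_t, coords) };
        MPI_Datatype types[2]     = { MPI_INT, MPI_DOUBLE };

        MPI_Type_create_struct(2, blocklens, displs, types, &tmp);
        /* Resize the extent to sizeof(particle_t) so a count > 1 stays
         * correct even when the compiler adds trailing padding. */
        MPI_Type_create_resized(tmp, 0, sizeof(particle_t), &particle_type);
        MPI_Type_free(&tmp);
        MPI_Type_commit(&particle_type);
        return particle_type;
    }

A buffer of particle_t can then be passed to MPI_Send or MPI_Recv with the returned type and a count equal to the number of structs.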

Re: [OMPI users] Parallel I/O doesn't work for derived datatypes with Fortran 90 interface

2012-08-07 Thread Paul Romano
Thanks for your response Jeff. My offset is of kind MPI_OFFSET_KIND, which leads me to believe it is the derived type that is causing the compilation error. I'm also able to successfully compile and run the same code using MPICH2. Out of curiosity, how is it that some of the more standard MPI…

Re: [OMPI users] Parallel I/O doesn't work for derived datatypes with Fortran 90 interface

2012-08-07 Thread Jeff Squyres
A common mistake with this subroutine is not declaring offset as an INTEGER(KIND=MPI_OFFSET_KIND). OMPI 1.6's F90 interface (and earlier versions) won't work with derived datatypes as the buffer, either. OMPI 1.7 has a wholly reimplemented "mpi" module that allows derived datatypes as…

Re: [OMPI users] Can't read more than 2^31 bytes with MPI_File_read, regardless of type?

2012-08-07 Thread Rob Latham
On Thu, Jul 12, 2012 at 10:53:52AM -0400, Jonathan Dursi wrote: > Hi: > > One of our users is reporting trouble reading large files with > MPI_File_read (or read_all). Even with a few different type sizes, chosen to > keep the count lower than 2^31, the problem persists. A simple C > program to test this is…

Re: [OMPI users] 1D and 2D arrays allocated with malloc(): MPI_Send and MPI_Recv problem.

2012-08-07 Thread Zbigniew Koza
Look at this declaration: int MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm). Here "count" is the *number of elements* (not bytes!) in the send buffer (a nonnegative integer). Your "count" was defined as count = rows*matrix_size*sizeof…
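
In other words, multiplying by sizeof() turns an element count into a byte count. A minimal sketch of the difference (the names rows, matrix_size, dest, and tag echo the thread and are otherwise assumptions):

    #include <mpi.h>
    #include <stdlib.h>

    void send_block(int rows, int matrix_size, int dest, int tag)
    {
        /* One contiguous block of doubles (fill it before sending). */
        double *data = malloc((size_t)rows * matrix_size * sizeof(double));

        /* Wrong: rows * matrix_size * sizeof(double) is a byte count; it
         * would send 8x too many elements and can overflow the int count. */

        /* Right: count is the number of MPI_DOUBLE elements. */
        MPI_Send(data, rows * matrix_size, MPI_DOUBLE, dest, tag,
                 MPI_COMM_WORLD);

        free(data);
    }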

Re: [OMPI users] 1D and 2D arrays allocated with malloc(): MPI_Send and MPI_Recv problem.

2012-08-07 Thread Paweł Jaromin
Hi. 2012/8/7 George Bosilca: > All MPI operations (including MPI_Send and MPI_Recv) consider any type of > buffers (input and output) as a contiguous entity. I tried using a 1D array (instead of 2D) to have contiguous data - but the result was the same :( > > Therefore, you have…

Re: [OMPI users] 1D and 2D arrays allocated with malloc(): MPI_Send and MPI_Recv problem.

2012-08-07 Thread George Bosilca
All MPI operations (including MPI_Send and MPI_Recv) consider any type of buffers (input and output) as a contiguous entity. Therefore, you have two options: 1. Have a loop around MPI_Send & MPI_Recv similar to the allocation section. 2. Build an MPI Datatype representing the non-contiguous…
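
A hedged sketch of both options for a matrix kept as an array of separately malloc'ed row pointers (the names matrix, rows, matrix_size, dest, and tag are assumptions):

    #include <mpi.h>
    #include <stdlib.h>

    /* Option 1: one MPI_Send per row, mirroring the allocation loop. */
    void send_rows_loop(double **matrix, int rows, int matrix_size,
                        int dest, int tag)
    {
        for (int i = 0; i < rows; i++)
            MPI_Send(matrix[i], matrix_size, MPI_DOUBLE, dest, tag,
                     MPI_COMM_WORLD);
    }

    /* Option 2: describe the scattered rows with a derived datatype built
     * from their absolute addresses, then send everything in one call. */
    void send_rows_datatype(double **matrix, int rows, int matrix_size,
                            int dest, int tag)
    {
        MPI_Aint    *displs    = malloc(rows * sizeof(MPI_Aint));
        int         *blocklens = malloc(rows * sizeof(int));
        MPI_Datatype rowstype;

        for (int i = 0; i < rows; i++) {
            MPI_Get_address(matrix[i], &displs[i]);
            blocklens[i] = matrix_size;
        }
        MPI_Type_create_hindexed(rows, blocklens, displs, MPI_DOUBLE,
                                 &rowstype);
        MPI_Type_commit(&rowstype);

        /* Displacements are absolute addresses, so the buffer is MPI_BOTTOM. */
        MPI_Send(MPI_BOTTOM, 1, rowstype, dest, tag, MPI_COMM_WORLD);

        MPI_Type_free(&rowstype);
        free(displs);
        free(blocklens);
    }

The receiver can build the same datatype over its own row pointers, or simply receive rows*matrix_size MPI_DOUBLEs into a contiguous buffer.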

[OMPI users] 1D and 2D arrays allocated with malloc(): MPI_Send and MPI_Recv problem.

2012-08-07 Thread Paweł Jaromin
Hello all. Sorry, maybe this is a stupid question, but I have a big problem with malloc() and matrix arrays. I want to make a program that does a very simple thing like matrixA * matrixB = matrixC. Because I need a bigger matrix size than 100x100 (5000x5000), I have to use malloc() for memory allocation.
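
A workaround that often comes up for this kind of problem (a sketch only, not the poster's code) is to allocate the matrix as one contiguous block plus a layer of row pointers, so normal A[i][j] indexing still works and a single MPI_Send can transmit the whole thing:

    #include <mpi.h>
    #include <stdlib.h>

    #define N 5000   /* matrix dimension from the post */

    /* Allocate an N x N matrix as one contiguous block, with row pointers
     * layered on top for A[i][j] indexing. */
    double **alloc_matrix(void)
    {
        double  *block = malloc((size_t)N * N * sizeof(double));
        double **rows  = malloc(N * sizeof(double *));
        for (int i = 0; i < N; i++)
            rows[i] = block + (size_t)i * N;
        return rows;
    }

    /* Because the data is contiguous starting at rows[0], the whole matrix
     * can be sent in a single call:
     *     MPI_Send(rows[0], N * N, MPI_DOUBLE, dest, tag, MPI_COMM_WORLD);
     */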