Excellent. The bug fix will be in 1.6.1, too.
On May 16, 2012, at 1:26 PM, Ricardo Reis wrote:

> all problems gone, thanks for the input and assistance.
>
> cheers,
>
> Ricardo Reis
>
> 'Non Serviam'
>
> PhD/MSc Mechanical Engineering | Lic. Aerospace Engineering
>
> Computational Fluid Dynamics, High Performance Computing, Turbulence
>
> http://www.lasef.ist.utl.pt
>
> Cultural Instigator @ Rádio Zero
Dear all

I think I found the culprit.

I was calculating my offset using

 offset = my_rank*dim

where dim is the array size. Both my_rank and dim are normal integers and
here lies the rub.

Fortran (or should I say gfortran?) multiplies my_rank*dim in integer*4
and only then converts the result to integer*8, so the product overflows
before the conversion.
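A minimal sketch of the fix described above (assuming my_rank and dim are
default 4-byte integers, as in the snippets in this thread): promote one
operand to the offset kind so the multiplication itself happens in 64 bits.

```fortran
integer :: my_rank, dim
integer(kind=MPI_OFFSET_KIND) :: offset

! Wrong: my_rank*dim is evaluated in integer*4 and overflows
! before the assignment to the 64-bit offset ever happens.
! offset = my_rank * dim

! Fixed: converting one operand first forces a 64-bit multiply.
offset = int(my_rank, kind=MPI_OFFSET_KIND) * dim
```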
On May 15, 2012, at 4:13 PM, Ricardo Reis wrote:

> printing the result of
>
> bit_size(offset)
>
> does give the value of 64

Ok, good.

> I reckon I had an error in my debug code, I was truncating the output format,
> that explains why I'm chasing a gambuzino with this point.

Or the OMPI bug.
yves.secre...@ete.inrs.ca

Before printing, think of the environment

-----Original Message-----
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org]
On behalf of Jeff Squyres
Sent: May 15, 2012, 14:29
To: Open MPI Users
Subject: Re: [OMPI users] MPI-IO puzzlement
On May 15, 2012, at 10:53 AM, Ricardo Reis wrote:

> My problem is rather that
>
> INTEGER(kind=MPI_OFFSET_KIND) :: offset
>
> MPI_OFFSET_KIND is insufficient to represent my offset...

Is it not a 64 bit integer for your compiler?

There *is* a bug in OMPI at the moment that we're casting the result...
My problem is rather that

 INTEGER(kind=MPI_OFFSET_KIND) :: offset

MPI_OFFSET_KIND is insufficient to represent my offset...

best,

Ricardo Reis
On May 15, 2012, at 9:30 AM, Secretan Yves wrote:

> If, by any bad luck, you use any of the following FORTRAN functions:
>
> MPI_FILE_GET_POSITION
> MPI_FILE_GET_SIZE
> MPI_FILE_GET_VIEW

Ouch! Looking at the thread you cited, it looks like George never followed up
on this. :-(

I'll file a bug.
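For context, these routines return their result through an
INTEGER(kind=MPI_OFFSET_KIND) argument; a sketch of what the caller's
declarations should look like (variable names are illustrative, not from
the thread):

```fortran
integer :: fh, ierr
integer(kind=MPI_OFFSET_KIND) :: fsize

! The size argument must be declared with the offset kind;
! a default integer here silently truncates sizes over 2 GiB.
call MPI_FILE_GET_SIZE(fh, fsize, ierr)
```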
Hi all

The problem has been found.

I'm trying to use MPI-IO to write the file with all processes taking part
in the calculation writing their bit. Here lies the rub.

Each process has to write a piece of DIM = 35709696 elements.

Using 64 processes the offset is my_rank * dim

and so...
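The numbers above already show where that offset breaks a default 4-byte
integer (a sketch of the arithmetic, assuming ranks 0..63):

```fortran
! Largest offset, for rank 63, computed exactly:
!   63 * 35709696 = 2249710848
! Largest value a 32-bit signed integer can hold:
!   2**31 - 1     = 2147483647
! So a 32-bit product wraps around to a negative offset:
!   2249710848 - 2**32 = -2045256448
```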
> what file system is this on?

gluster connected by infiniband. all disks in the same machine, everyone
speaks on infiniband.

Ricardo Reis
what file system is this on?

On 5/10/2012 12:37 PM, Ricardo Reis wrote:
>
>> what is the communicator that you used to open the file? I am wondering
>> whether it differs from the communicator used in MPI_Barrier, and some
>> processes do not enter the Barrier at all...
>>
>> Thanks
>> Edgar
>
> world, I only use one comm on this code.
> what is the communicator that you used to open the file? I am wondering
> whether it differs from the communicator used in MPI_Barrier, and some
> processes do not enter the Barrier at all...
>
> Thanks
> Edgar

world, I only use one comm on this code.

world = MPI_COMM_WORLD

CALL MPI_file_open(world, ...
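The truncated call above presumably continues with the usual MPI_FILE_OPEN
arguments; a hedged sketch (the file name and access mode are assumptions,
not taken from the thread):

```fortran
integer :: world, fh, ierr
world = MPI_COMM_WORLD

! Collective over 'world': every process in the communicator must
! make this call, just as with the later MPI_Barrier.
call MPI_file_open(world, 'output.dat', &
                   MPI_MODE_WRONLY + MPI_MODE_CREATE, &
                   MPI_INFO_NULL, fh, ierr)
```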
what is the communicator that you used to open the file? I am wondering
whether it differs from the communicator used in MPI_Barrier, and some
processes do not enter the Barrier at all...

Thanks
Edgar

On 5/10/2012 12:22 PM, Ricardo Reis wrote:
>
> Hi all
>
> I'm trying to run my code in a cluster...