I am performing a prefix scan operation on a cluster.
I have 3 MPI tasks, and the master task is responsible for distributing the data.
Now, each task calculates the sum of its own part of the array using GPUs and
returns the result to the master task.
The master task also calculates its own part of the array using a GPU.
Whe
Hi Ralph,
Sorry for the false alarm, and thanks for the tip:
... version confusion where the mpirun being used doesn't match the backend
daemons.
Yes, my test environment was wonky. All is well now.
On May 14, 2012, at 3:41 PM, David Turner wrote:
...
[c0667:24962] [[39579,1],11] ORTE_E
On May 15, 2012, at 4:13 PM, Ricardo Reis wrote:
> printing the result of
>
> bit_size(offset)
>
> does give the value of 64
Ok, good.
> I reckon I had an error in my debug code: I was truncating the output format.
> That explains why I've been chasing a gambuzino (a wild goose) on this point.
Or the OMPI bug
On Tue, 15 May 2012, Jeff Squyres wrote:
On May 15, 2012, at 2:19 PM, Ricardo Reis wrote:
INTEGER(kind=MPI_OFFSET_KIND) :: offset
MPI_OFFSET_KIND is insufficient to represent my offset...
Is it not a 64 bit integer for your compiler?
I'm still interested in the answer to this question.
p
More specifically -- are you calling any MPI function before calling MPI_Init?
(particularly from Fortran)
I.e., you may be seeing a side-effect of calling a Fortran MPI function before
calling MPI_Init.
On May 13, 2012, at 9:53 AM, Ralph Castain wrote:
> I believe the error message is pretty
On May 14, 2012, at 11:33 AM, vaibhav dutt wrote:
> Can anybody tell me how to enable the polling and interrupt/blocking
> execution in
> OpenMPI?
Open MPI doesn't have a blocking mode; it always aggressively polls.
--
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
htt
Oops -- yes, I missed FILE_GET_POSITION_SHARED. I amended the patch:
https://svn.open-mpi.org/trac/ompi/raw-attachment/ticket/3095/fortran-file-int-cast-fix2.diff
MPI_Address is also a deprecated function exactly because it takes an MPI_Fint --
MPI_Get_address replaced it, and uses an MPI_Aint.
O
Hi,
The patch contains changes for mpi_file_get_view, mpi_file_get_position and
mpi_file_get_size.
I think mpi_file_get_position_shared is another candidate, and maybe
mpi_address, but about this last one I am not sure.
In my first response, I only listed the functions I use.
Yves Secretan
yves.secre
On May 15, 2012, at 2:19 PM, Ricardo Reis wrote:
>>> INTEGER(kind=MPI_OFFSET_KIND) :: offset
>>>
>>> MPI_OFFSET_KIND is insufficient to represent my offset...
>>
>> Is it not a 64 bit integer for your compiler?
I'm still interested in the answer to this question.
>> There *is* a bug in OMPI at
On Tue, 15 May 2012, Jeff Squyres wrote:
On May 15, 2012, at 10:53 AM, Ricardo Reis wrote:
My problem is rather that
INTEGER(kind=MPI_OFFSET_KIND) :: offset
MPI_OFFSET_KIND is insufficient to represent my offset...
Is it not a 64 bit integer for your compiler?
There *is* a bug in OMPI at t
See https://bugzilla.redhat.com/show_bug.cgi?id=814798
$ mpicc -showme:link
-pthread -m64 -L/usr/lib64/openmpi/lib -lmpi -ldl -lhwloc
-ldl and -lhwloc should not be listed. The user should only link against
libraries that they are using directly, namely -lmpi, and they should
explicitly add -
You are absolutely correct -- doh!
I've fixed it on the trunk and submitted the fix for the v1.6 branch.
Thanks!
On May 14, 2012, at 9:14 AM, Götz Waschk wrote:
> Dear Open-MPI developers,
>
> I have built my own package of openmpi 1.6 based on the RHEL6 package
> on my SL6 test machine. My t
On May 15, 2012, at 10:53 AM, Ricardo Reis wrote:
> My problem is rather that
>
> INTEGER(kind=MPI_OFFSET_KIND) :: offset
>
> MPI_OFFSET_KIND is insufficient to represent my offset...
Is it not a 64 bit integer for your compiler?
There *is* a bug in OMPI at the moment that we're casting the res
My problem is rather that
INTEGER(kind=MPI_OFFSET_KIND) :: offset
MPI_OFFSET_KIND is insufficient to represent my offset...
best,
Ricardo Reis
'Non Serviam'
PhD/MSc Mechanical Engineering | Lic. Aerospace Engineering
Computational Fluid Dynamics, High Performance Computing, Turbulen
On May 15, 2012, at 9:30 AM, Secretan Yves wrote:
> If, by any bad luck, you use any of the following FORTRAN functions
>
> MPI_FILE_GET_POSITION
> MPI_FILE_GET_SIZE
> MPI_FILE_GET_VIEW
Ouch! Looking at the thread you cited, it looks like George never followed up
on this. :-(
I'll file a bu
Hi,
If, by any bad luck, you use any of the following FORTRAN functions
MPI_FILE_GET_POSITION
MPI_FILE_GET_SIZE
MPI_FILE_GET_VIEW
MPI_TYPE_EXTENT
they all are still overflowing
(http://www.open-mpi.org/community/lists/devel/2010/12/8797.php) because they
cast the correct result to MPI_Fint w
Hi all
The problem has been found.
I'm trying to use MPI-IO to write the file, with all processes taking part
in the calculation writing their bit. Therein lies the rub.
Each process has to write a piece of DIM = 35709696 elements.
Using 64 processes, the offset is my_rank * dim
and so...