[OMPI devel] Incomplete MPI-1 removal

2019-04-27 Thread Lisandro Dalcin via devel
All the symbols below are legacy MPI-1 stuff; they should go away, be marked as deprecated, etc. $ grep "MPI_COMBINER_[HS].*_INTEGER" /home/devel/mpi/openmpi/4.0.1/include/mpi.h MPI_COMBINER_HVECTOR_INTEGER, MPI_COMBINER_HINDEXED_INTEGER, MPI_COMBINER_STRUCT_INTEGER, -- Lisan

[OMPI devel] 2.0.0rc3 MPI_Comm_split_type()

2016-06-16 Thread Lisandro Dalcin
ception: MPI_ERR_ARG: invalid argument of some other kind -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) Extreme Computing Research Center (ECRC) King Abdullah University of Science and Technology (KAUST) http://ecrc.

[OMPI devel] MPI_Group_intersection: malloc(0) warning with 2.0.0rc3

2016-06-16 Thread Lisandro Dalcin
debug: Request for 0 bytes (group/group.c, 456) -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) Extreme Computing Research Center (ECRC) King Abdullah University of Science and Technology (KAUST) http://ecrc.kaust.edu.sa/
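
The original test case is not preserved here; a minimal sketch of the kind of call that triggers the 0-byte request (my reconstruction, assuming the empty intersection result is what drives it):

#include <mpi.h>
int main(int argc, char *argv[])
{
  MPI_Group world, inter;
  MPI_Init(&argc, &argv);
  MPI_Comm_group(MPI_COMM_WORLD, &world);
  /* empty intersection: a zero-sized rank array gets allocated */
  MPI_Group_intersection(world, MPI_GROUP_EMPTY, &inter);
  if (inter != MPI_GROUP_EMPTY) MPI_Group_free(&inter);
  MPI_Group_free(&world);
  MPI_Finalize();
  return 0;
}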

[OMPI devel] Issue with 2.0.0rc3, singleton init

2016-06-16 Thread Lisandro Dalcin
6== The main thread stack size used in this run was 8720384. Killed -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) Extreme Computing Research Center (ECRC) King Abdullah University of Science and Technology (KAUST) http://

[OMPI devel] C type of MPI_UNWEIGHTED and MPI_WEIGHTS_EMPTY

2016-03-13 Thread Lisandro Dalcin
) /* unweighted graph */ #define MPI_WEIGHTS_EMPTY ((int *) 3) /* empty weights */ PS: While the current definition is kind of harmless for C, it is likely wrong for C++. -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Sciences

Re: [OMPI devel] MPI_Get_address() with MPI_BOTTOM

2016-02-11 Thread Lisandro Dalcin
dded some additional comments. PS: Should I go to GitHub directly next time? Or you still prefer bug reports in the mailing list? -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) Numerical Porous Media Center (NumPor) King Abdull

[OMPI devel] Failure calling MPI_Type_set_attr(datatype, keyval, NULL)

2016-02-11 Thread Lisandro Dalcin
Despite working for communicators and windows, setting a NULL attribute value in datatypes fails with MPI_ERR_ARG. Run the attached test case to reproduce. -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) Numerical Po
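
The attached test case is not preserved in this archive; a reconstructed sketch of the failing pattern (assumes a duped datatype and the null copy/delete callbacks):

#include <mpi.h>
int main(int argc, char *argv[])
{
  int keyval;
  MPI_Datatype t;
  MPI_Init(&argc, &argv);
  MPI_Type_create_keyval(MPI_TYPE_NULL_COPY_FN, MPI_TYPE_NULL_DELETE_FN,
                         &keyval, NULL);
  MPI_Type_dup(MPI_INT, &t);
  MPI_Type_set_attr(t, keyval, NULL);  /* reported to fail with MPI_ERR_ARG */
  MPI_Type_free(&t);
  MPI_Type_free_keyval(&keyval);
  MPI_Finalize();
  return 0;
}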

[OMPI devel] MPI_Get_address() with MPI_BOTTOM

2016-02-11 Thread Lisandro Dalcin
valid argument of some other kind [kw2060:18815] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [kw2060:18815] ***and potentially your MPI job) -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) N
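
A minimal sketch of the reported call (my reconstruction; whether the standard requires it to succeed was part of the discussion):

#include <mpi.h>
#include <stdio.h>
int main(int argc, char *argv[])
{
  MPI_Aint a;
  MPI_Init(&argc, &argv);
  MPI_Get_address(MPI_BOTTOM, &a);  /* reported to abort with MPI_ERR_ARG */
  printf("address of MPI_BOTTOM: %ld\n", (long)a);
  MPI_Finalize();
  return 0;
}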

Re: [OMPI devel] malloc(0) warnings in post/wait and start/complete calls with GROUP_EMPTY

2016-02-08 Thread Lisandro Dalcin
't complain. > btw, how do you get these warnings automatically ? > ./configure --enable-debug --enable-mem-debug ... PS: Running a trivial program with MPI_Init()/Finalize() shows a few memory leaks if run under valgrind. Just FYI, in case you want to take a closer look. -- Lisandro

[OMPI devel] malloc(0) warnings in post/wait and start/complete calls with GROUP_EMPTY

2016-02-01 Thread Lisandro Dalcin
(osc_pt2pt_active_target.c, 76) malloc debug: Request for 0 bytes (osc_pt2pt_active_target.c, 78) malloc debug: Request for 0 bytes (osc_pt2pt_active_target.c, 76) malloc debug: Request for 0 bytes (osc_pt2pt_active_target.c, 78) -- Lisandro Dalcin Research Scientist Computer, Electrical and
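
A reconstructed sketch of the active-target epoch that exercises these allocations (not the original test case):

#include <mpi.h>
int main(int argc, char *argv[])
{
  int buf = 0;
  MPI_Win win;
  MPI_Init(&argc, &argv);
  MPI_Win_create(&buf, sizeof(buf), 1, MPI_INFO_NULL, MPI_COMM_WORLD, &win);
  MPI_Win_post(MPI_GROUP_EMPTY, 0, win);   /* expose epoch with no origins */
  MPI_Win_start(MPI_GROUP_EMPTY, 0, win);  /* access epoch with no targets */
  MPI_Win_complete(win);
  MPI_Win_wait(win);
  MPI_Win_free(&win);
  MPI_Finalize();
  return 0;
}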

Re: [OMPI devel] Issues with nonblocking collectives for zero-sized messages

2015-11-10 Thread Lisandro Dalcin
On 10 November 2015 at 11:35, Nick Papior wrote: > Please try the patch from this post: > http://www.open-mpi.org/community/lists/users/2015/11/28030.php > It worked. Thanks. -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Sciences & Engin

[OMPI devel] Issues with nonblocking collectives for zero-sized messages

2015-11-10 Thread Lisandro Dalcin
in this communicator will now abort, [kw2060:25212] ***and potentially your MPI job) -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) Numerical Porous Media Center (NumPor) King Abdullah University of Science

Re: [OMPI devel] Build regression: VampirTrace libraries built with debug symbols and no optimization

2015-08-10 Thread Lisandro Dalcin
On 20 June 2015 at 20:48, Bert Wesarg wrote: > Lisandro, > > On 06/20/2015 05:03 AM, Lisandro Dalcin wrote: >> >> Open MPI 1.8.6 was released, and this issue seems to be still there. >> Linux binaries are near 3 times larger: >> https://binstar.org/mpi4py/openmp

Re: [OMPI devel] malloc(0) warning with 1.8.7

2015-08-06 Thread Lisandro Dalcin
es. All warnings were silenced. Thanks! As I understand, a 1.8.8 tarball is going to be released; it would be nice to add this fix to it. I'm attaching a patch against 1.8.7; it is basically your commit diff ignoring white-space changes and reverting mca_coll_base_module_2_1_0_t -> mca_coll_base_m

[OMPI devel] 1.8.7 release tarball versus v1.8.7 tag in ompi-release repo

2015-07-24 Thread Lisandro Dalcin
/oob_tcp_connection.c and openmpi-1.8.7/orte/mca/oob/tcp/oob_tcp_connection.c differ Files ompi-release/VERSION and openmpi-1.8.7/VERSION differ -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) Numerical Porous Media Center (Nu

[OMPI devel] malloc(0) warning with 1.8.7

2015-07-24 Thread Lisandro Dalcin
: Request for 0 bytes (coll_libnbc_ireduce_scatter_block.c, 67) -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) Numerical Porous Media Center (NumPor) King Abdullah University of Science and Technology (KAUST) http://numpor

Re: [OMPI devel] Bug

2015-06-23 Thread Lisandro Dalcin
ly you get a failure that is not Open MPI's fault. -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) Numerical Porous Media Center (NumPor) King Abdullah University of Science and Technology (KAUST) http://numpor.kaust.edu.sa/

Re: [OMPI devel] Bug

2015-06-22 Thread Lisandro Dalcin
so, my testsuite tests for corner cases and "stupid" code paths (e.g. create and destroy something without ever using it for anything useful) that evidently no one out there is testing. -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Sciences &

[OMPI devel] Regressions: MPI_Win_{start|post}() with MPI_GROUP_EMPTY

2015-06-22 Thread Lisandro Dalcin
The attached test code used to work in 1.8.5 and below, but it fails in 1.8.6 with MPI_ERR_INTERN (tested on OS X). -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) Numerical Porous Media Center (NumPor)

Re: [OMPI devel] Bug

2015-06-22 Thread Lisandro Dalcin
1.8.6 silenced the warnings? IIRC, I reported other problems with (i?)reduce-scatter, but these were fixed in 1.8.5 -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) Numerical Porous Media Center (NumPor)

[OMPI devel] Bug

2015-06-20 Thread Lisandro Dalcin
w abort, [kl-13999:50786] ***and potentially your MPI job) -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) Numerical Porous Media Center (NumPor) King Abdullah University of Science and Technology (KAUST) http://numpor.kaust

Re: [OMPI devel] Build regression: VampirTrace libraries built with debug symbols and no optimization

2015-06-19 Thread Lisandro Dalcin
Open MPI 1.8.6 was released, and this issue seems to be still there. Linux binaries are near 3 times larger: https://binstar.org/mpi4py/openmpi/files On 8 May 2015 at 06:47, Lisandro Dalcin wrote: > A build of 1.8.4 with just "./configure --prefix=..." produces the > following V

[OMPI devel] Build regression: VampirTrace libraries built with debug symbols and no optimization

2015-05-08 Thread Lisandro Dalcin
ntrib/vt looks ok, but all the others in subdirs under ompi/contrib/vt are not; this smells like a build regression you are likely not aware of. -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) Numerical Porous Media

[OMPI devel] Issues with MPI_Type_create_f90_{real|complex}

2015-05-07 Thread Lisandro Dalcin
by 0x4008BA: main (in /home/dalcinl/Devel/BUGS-MPI/openmpi/a.out) ==1025== -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) Numerical Porous Media Center (NumPor) King Abdullah University of Science and Technolog

[OMPI devel] Warnings about malloc(0) in debug build

2015-05-07 Thread Lisandro Dalcin
for 0 bytes (coll_libnbc_ireduce_scatter_block.c, 67) malloc debug: Request for 0 bytes (nbc_internal.h, 505) malloc debug: Request for 0 bytes (osc_rdma_active_target.c, 74) malloc debug: Request for 0 bytes (osc_rdma_active_target.c, 76) -- Lisandro Dalcin Research Scientist Com

Re: [OMPI devel] Different behaviour with MPI_IN_PLACE in MPI_Reduce_scatter() and MPI_Ireduce_scatter()

2014-12-23 Thread Lisandro Dalcin
ted: 0 $ mpiexec -n 1 ./a.out [0] rbuf[0]= 0 expected: 1 The last one is wrong. Not sure what's going on. Am I missing something? -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) Numerical Porous Media Center (NumPo

Re: [OMPI devel] Neighbor collectives with periodic Cartesian topologies of size one

2014-09-28 Thread Lisandro Dalcin
On 25 September 2014 20:50, Nathan Hjelm wrote: > On Tue, Aug 26, 2014 at 07:03:24PM +0300, Lisandro Dalcin wrote: >> I finally managed to track down some issues in mpi4py's test suite >> using Open MPI 1.8+. The code below should be enough to reproduce the >> problem

Re: [OMPI devel] Different behaviour with MPI_IN_PLACE in MPI_Reduce_scatter() and MPI_Ireduce_scatter()

2014-09-28 Thread Lisandro Dalcin
d: 1 $ mpicc -DNBCOLL=1 ireduce_scatter.c && mpiexec -n 1 ./a.out [0] rbuf[0]=60 expected: 1 -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) Numerical Porous Media Center (NumPor) King Abdullah University of Sci

[OMPI devel] Valgrind warning in MPI_Win_allocate[_shared]()

2014-09-28 Thread Lisandro Dalcin
goto error; } if (blocking_fence) { -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) Numerical Porous Media Center (NumPor) King Abdullah University of Science and Technology (KAUST) http://numpor.kaust.e

Re: [OMPI devel] malloc 0 warnings

2014-08-27 Thread Lisandro Dalcin
On 27 August 2014 02:38, Jeff Squyres (jsquyres) wrote: > If you have reproducers, yes, that would be most helpful -- thanks. > Here you have another one... $ cat igatherv.c #include <mpi.h> int main(int argc, char *argv[]) { signed char a=1,b=2; int rcounts[1] = {0}; int rdispls[1] = {0}; MPI_
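
The snippet is cut off in this archive; a completed sketch along the same lines (my reconstruction, assuming MPI_SIGNED_CHAR and root 0):

#include <mpi.h>
int main(int argc, char *argv[])
{
  signed char a = 1, b = 2;
  int rcounts[1] = {0};
  int rdispls[1] = {0};
  MPI_Request req;
  MPI_Init(&argc, &argv);
  /* zero-count nonblocking gatherv, one of the malloc(0) reproducers */
  MPI_Igatherv(&a, 0, MPI_SIGNED_CHAR,
               &b, rcounts, rdispls, MPI_SIGNED_CHAR,
               0, MPI_COMM_WORLD, &req);
  MPI_Wait(&req, MPI_STATUS_IGNORE);
  MPI_Finalize();
  return 0;
}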

Re: [OMPI devel] malloc 0 warnings

2014-08-27 Thread Lisandro Dalcin
On 27 August 2014 02:38, Jeff Squyres (jsquyres) wrote: > If you have reproducers, yes, that would be most helpful -- thanks. > OK, here you have something to start. To be fair, this is a reduction with zero count. I have many other tests for reductions with zero count that are failing. Does Ope

Re: [OMPI devel] Envelope of HINDEXED_BLOCK

2014-08-27 Thread Lisandro Dalcin
e error message is from libtoolize about a file missing from the libtool > installation directory. > So, this looks (to me) like a mis-installation of libtool. > Of course, after $ sudo yum install libtool-ltdl-devel in my Fedora 20 box, everything went fine. Sorry for the noise.

Re: [OMPI devel] MPI calls in callback functions during MPI_Finalize()

2014-08-27 Thread Lisandro Dalcin
s already bad habit, > which is rightfully punished by Open MPI. > After much thinking about it, I must surrender :-), you were right. Sorry for the noise. -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) Numerical

Re: [OMPI devel] MPI calls in callback functions during MPI_Finalize()

2014-08-26 Thread Lisandro Dalcin
pecifically at MPI_Finalize(). Caching duplicated communicators is a key feature in many libraries. How do you propose to handle the deallocation of the duped communicators when COMM_WORLD is involved? -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Sc
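
For context, the idiom under discussion, sketched from the MPI-2 rule that MPI_Finalize() first deletes the attributes on MPI_COMM_SELF (my illustration, not code from the thread):

#include <mpi.h>

static MPI_Comm cached = MPI_COMM_NULL;  /* a library's duped communicator */

static int cleanup(MPI_Comm comm, int keyval, void *attr, void *extra)
{
  /* runs at the start of MPI_Finalize(), while MPI is still usable */
  if (cached != MPI_COMM_NULL) MPI_Comm_free(&cached);
  return MPI_SUCCESS;
}

int main(int argc, char *argv[])
{
  int keyval;
  MPI_Init(&argc, &argv);
  MPI_Comm_dup(MPI_COMM_WORLD, &cached);
  MPI_Comm_create_keyval(MPI_COMM_NULL_COPY_FN, cleanup, &keyval, NULL);
  MPI_Comm_set_attr(MPI_COMM_SELF, keyval, NULL);
  MPI_Finalize();  /* deletes COMM_SELF attributes first, invoking cleanup() */
  return 0;
}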

Re: [OMPI devel] Envelope of HINDEXED_BLOCK

2014-08-26 Thread Lisandro Dalcin
ould it be related to automake 13 instead of 12 ? -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) Numerical Porous Media Center (NumPor) King Abdullah University of Science and Technology (KAUST) http://numpor.kaust.edu.

[OMPI devel] malloc 0 warnings

2014-08-26 Thread Lisandro Dalcin
debug: Request for 0 bytes (osc_rdma_active_target.c, 74) -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) Numerical Porous Media Center (NumPor) King Abdullah University of Science and Technology (KAUST) http://numpor.kau

[OMPI devel] Neighbor collectives with periodic Cartesian topologies of size one

2014-08-26 Thread Lisandro Dalcin
;argc, &argv); MPI_Cart_create(MPI_COMM_SELF, ndims, dims, periods, 0, &comm); MPI_Neighbor_allgather(&sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, comm); {int i; for (i=0;i<5;i++) printf("%d ",recvbuf[i]); printf("
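
A completed sketch of the truncated code above (my reconstruction, assuming one periodic dimension of size one):

#include <mpi.h>
#include <stdio.h>
int main(int argc, char *argv[])
{
  /* periodic Cartesian topology of size one: every neighbor is self */
  int ndims = 1, dims[1] = {1}, periods[1] = {1};
  int sendbuf = 7, recvbuf[2] = {-1, -1};
  MPI_Comm comm;
  MPI_Init(&argc, &argv);
  MPI_Cart_create(MPI_COMM_SELF, ndims, dims, periods, 0, &comm);
  /* 2*ndims neighbors; with periodicity both of them are rank 0 */
  MPI_Neighbor_allgather(&sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, comm);
  printf("%d %d\n", recvbuf[0], recvbuf[1]);  /* expected: 7 7 */
  MPI_Comm_free(&comm);
  MPI_Finalize();
  return 0;
}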

[OMPI devel] MPI calls in callback functions during MPI_Finalize()

2014-08-26 Thread Lisandro Dalcin
ompleted successfully; not able to aggregate error messages, and not able to guarantee that all other processes were killed! -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) Numerical Porous Media Center (NumPor) King Abdu

[OMPI devel] Comm_split_type(COMM_SELF, MPI_UNDEFINED, ...)

2014-08-26 Thread Lisandro Dalcin
SELF [kw2060:9865] *** MPI_ERR_ARG: invalid argument of some other kind [kw2060:9865] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [kw2060:9865] ***and potentially your MPI job) -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Scien
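
A minimal sketch of the failing call (reconstruction): with split_type=MPI_UNDEFINED the routine should return MPI_COMM_NULL in newcomm rather than raise MPI_ERR_ARG.

#include <mpi.h>
int main(int argc, char *argv[])
{
  MPI_Comm newcomm;
  MPI_Init(&argc, &argv);
  MPI_Comm_split_type(MPI_COMM_SELF, MPI_UNDEFINED, 0,
                      MPI_INFO_NULL, &newcomm);  /* expect MPI_COMM_NULL */
  MPI_Finalize();
  return 0;
}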

[OMPI devel] Envelope of HINDEXED_BLOCK

2014-08-26 Thread Lisandro Dalcin
", ni, na, nd, combiner); MPI_Type_free(&datatype); MPI_Finalize(); return 0; } $ mpicc type_hindexed_block.c $ ./a.out ni=7 na=5 nd=1 combiner=18 -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) Numerical Por
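
A reconstruction of the truncated test (assuming count=5, to match na=5 above). The MPI-3 envelope table for MPI_COMBINER_HINDEXED_BLOCK specifies ni=2 (count and blocklength), na=count, and nd=1, whereas the output above shows ni=7:

#include <mpi.h>
#include <stdio.h>
int main(int argc, char *argv[])
{
  int ni, na, nd, combiner;
  MPI_Aint disps[5] = {0, 8, 16, 24, 32};
  MPI_Datatype datatype;
  MPI_Init(&argc, &argv);
  MPI_Type_create_hindexed_block(5, 1, disps, MPI_INT, &datatype);
  MPI_Type_get_envelope(datatype, &ni, &na, &nd, &combiner);
  printf("ni=%d na=%d nd=%d combiner=%d\n", ni, na, nd, combiner);
  MPI_Type_free(&datatype);
  MPI_Finalize();
  return 0;
}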

[OMPI devel] Patch to fix valgrind warning

2014-04-24 Thread Lisandro Dalcin
.so.1.0) ==19533==by 0x38442F2357: ??? (in /usr/lib64/libpython2.7.so.1.0) ==19533==by 0x38442F2FF0: ??? (in /usr/lib64/libpython2.7.so.1.0) ==19533==by 0x38442F323C: ??? (in /usr/lib64/libpython2.7.so.1.0) -- Lisandro Dalcin --- CIMEC (UNL/CONICET) Predio CONICET-Santa Fe

[OMPI devel] MPI_Comm_create_group()

2014-04-21 Thread Lisandro Dalcin
/Devel/BUGS-MPI/openmpi/a.out) ==22675== -- Lisandro Dalcin --- CIMEC (UNL/CONICET) Predio CONICET-Santa Fe Colectora RN 168 Km 472, Paraje El Pozo 3000 Santa Fe, Argentina Tel: +54-342-4511594 (ext 1016) Tel/Fax: +54-342-4511169

[OMPI devel] Different behaviour with MPI_IN_PLACE in MPI_Reduce_scatter() and MPI_Ireduce_scatter()

2014-04-21 Thread Lisandro Dalcin
I'm not sure this is actually a bug, but the difference may surprise users. It seems that the implementation of MPI_Ireduce_scatter(MPI_IN_PLACE,...) (ab?)uses the recvbuf to compute the intermediate reduction, while MPI_Reduce_scatter(MPI_IN_PLACE,...) does not. Look at the following code (setup
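
The code from the message is truncated; a sketch of the comparison it describes (my reconstruction, assuming one process, recvcounts={1}, and MPI_SUM; compile with -DNBCOLL=1 for the nonblocking variant):

#include <mpi.h>
#include <stdio.h>
int main(int argc, char *argv[])
{
  int rbuf[1] = {1};
  int rcounts[1] = {1};
  MPI_Init(&argc, &argv);
#if defined(NBCOLL)
  MPI_Request req;
  MPI_Ireduce_scatter(MPI_IN_PLACE, rbuf, rcounts,
                      MPI_INT, MPI_SUM, MPI_COMM_WORLD, &req);
  MPI_Wait(&req, MPI_STATUS_IGNORE);
#else
  MPI_Reduce_scatter(MPI_IN_PLACE, rbuf, rcounts,
                     MPI_INT, MPI_SUM, MPI_COMM_WORLD);
#endif
  printf("[0] rbuf[0]=%d expected: 1\n", rbuf[0]);
  MPI_Finalize();
  return 0;
}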

[OMPI devel] Issues with MPI_Add_error_class()

2014-04-21 Thread Lisandro Dalcin
s in this communicator will now abort, [kw2060:20883] *** and potentially your MPI job) -- Lisandro Dalcin --- CIMEC (UNL/CONICET) Predio CONICET-Santa Fe Colectora RN 168 Km 472, Paraje El Pozo 3000 Santa Fe, Argentina Tel: +54-342-4511594 (ext 1016) Tel/Fax: +54-342-4511169

[OMPI devel] MPI_Type_create_hindexed_block() segfaults

2014-04-21 Thread Lisandro Dalcin
./a.out[0x40080c] [kw2060:20304] [ 5] /lib64/libc.so.6(__libc_start_main+0xf5)[0x327bc21d65] [kw2060:20304] [ 6] ./a.out[0x4006f9] [kw2060:20304] *** End of error message *** Segmentation fault (core dumped) -- Lisandro Dalcin --- CIMEC (UNL/CONICET) Predio CONICET-Santa Fe Colect

[OMPI devel] Win_fence() with assertion=MPI_MODE_NOPRECEDE|MPI_MODE_NOSUCCEED

2014-04-21 Thread Lisandro Dalcin
esses in this win will now abort, [kw2060:19890] ***and potentially your MPI job) [dalcinl@kw2060 openmpi]$ -- Lisandro Dalcin --- CIMEC (UNL/CONICET) Predio CONICET-Santa Fe Colectora RN 168 Km 472, Paraje El Pozo 3000 Santa Fe, Argentina Tel: +54-342-4511594 (ext 1016) Tel/Fax: +54-342-4511169
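
A minimal sketch of the aborting call (reconstruction): combining the two assertions declares a fence that neither ends nor starts an RMA epoch, which the standard permits.

#include <mpi.h>
int main(int argc, char *argv[])
{
  int buf = 0;
  MPI_Win win;
  MPI_Init(&argc, &argv);
  MPI_Win_create(&buf, sizeof(buf), 1, MPI_INFO_NULL, MPI_COMM_WORLD, &win);
  MPI_Win_fence(MPI_MODE_NOPRECEDE | MPI_MODE_NOSUCCEED, win);
  MPI_Win_free(&win);
  MPI_Finalize();
  return 0;
}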

[OMPI devel] querying Op commutativity for predefined reduction operations.

2014-04-21 Thread Lisandro Dalcin
239272148992] [kw2060:19303] *** on communicator MPI_COMM_WORLD [kw2060:19303] *** MPI_ERR_OP: invalid reduce operation [kw2060:19303] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [kw2060:19303] ***and potentially your MPI job) -- Lisandro Dalcin --- CIMEC
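
A minimal sketch of the failing query (reconstruction, assuming MPI_SUM): MPI_Op_commutative() is an MPI-2.2 call and should work on predefined operations.

#include <mpi.h>
#include <stdio.h>
int main(int argc, char *argv[])
{
  int commute = -1;
  MPI_Init(&argc, &argv);
  MPI_Op_commutative(MPI_SUM, &commute);  /* reported to raise MPI_ERR_OP */
  printf("MPI_SUM commutative: %d\n", commute);
  MPI_Finalize();
  return 0;
}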

[OMPI devel] Missing error strings for MPI_ERR_RMA_XXX error classes

2014-04-10 Thread Lisandro Dalcin
RR_RMA_SHARED The comment is wrong, the last predefined error class is MPI_ERR_RMA_SHARED and not MPI_ERR_RMA_FLAVOR. -- Lisandro Dalcin --- CIMEC (UNL/CONICET) Predio CONICET-Santa Fe Colectora RN 168 Km 472, Paraje El Pozo 3000 Santa Fe, Argentina Tel: +54-342-4511594 (ext 1016) Tel

[OMPI devel] Missing MPI 3 definitions

2014-03-27 Thread Lisandro Dalcin
could simply ignore the info handle, and the second could just return a brand new empty info handle (well, unless you implemented MPI_Comm_dup_with_info() to actually use the info hints). -- Lisandro Dalcin --- CIMEC (UNL/CONICET) Predio CONICET-Santa Fe Colectora RN 168 Km 472, Paraje El
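
A sketch of the trivial fallbacks suggested above (my illustration; the my_* wrapper names are hypothetical):

#include <mpi.h>

static int my_comm_dup_with_info(MPI_Comm comm, MPI_Info info,
                                 MPI_Comm *newcomm)
{
  (void)info;                 /* simply ignore the info handle */
  return MPI_Comm_dup(comm, newcomm);
}

static int my_comm_get_info(MPI_Comm comm, MPI_Info *info)
{
  (void)comm;                 /* return a brand new empty info handle */
  return MPI_Info_create(info);
}

int main(int argc, char *argv[])
{
  MPI_Comm dup;
  MPI_Info info;
  MPI_Init(&argc, &argv);
  my_comm_dup_with_info(MPI_COMM_WORLD, MPI_INFO_NULL, &dup);
  my_comm_get_info(dup, &info);
  MPI_Info_free(&info);
  MPI_Comm_free(&dup);
  MPI_Finalize();
  return 0;
}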

[OMPI devel] MPI_Is_thread_main() with provided=MPI_THREAD_SERIALIZED

2013-09-04 Thread Lisandro Dalcin
r/src/0a159982d7204d4b4b9fa61771d0fc7e9dc16771/ompi/mpi/c/is_thread_main.c?at=default#cl-50 -- Lisandro Dalcin --- CIMEC (INTEC/CONICET-UNL) Predio CONICET-Santa Fe Colectora RN 168 Km 472, Paraje El Pozo 3000 Santa Fe, Argentina Tel: +54-342-4511594 (ext 1011) Tel/Fax: +54-342-4511169

[OMPI devel] MPI_Mrecv(..., MPI_STATUS_IGNORE) in Open MPI 1.7.1

2013-05-01 Thread Lisandro Dalcin
code: Address not mapped (1) [localhost:17489] Failing at address: (nil) ... -- Lisandro Dalcin --- CIMEC (INTEC/CONICET-UNL) Predio CONICET-Santa Fe Colectora RN 168 Km 472, Paraje El Pozo 3000 Santa Fe, Argentina Tel: +54-342-4511594 (ext 1011) Tel/Fax: +54-342-4511169
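
A reconstructed sketch of the crash (not the original test case): a matched probe followed by MPI_Mrecv() with MPI_STATUS_IGNORE, which 1.7.1 reportedly dereferenced.

#include <mpi.h>
int main(int argc, char *argv[])
{
  int sbuf = 7, rbuf = -1;
  MPI_Message msg;
  MPI_Request req;
  MPI_Init(&argc, &argv);
  MPI_Isend(&sbuf, 1, MPI_INT, 0, 0, MPI_COMM_SELF, &req);
  MPI_Mprobe(0, 0, MPI_COMM_SELF, &msg, MPI_STATUS_IGNORE);
  MPI_Mrecv(&rbuf, 1, MPI_INT, &msg, MPI_STATUS_IGNORE);  /* segfaults here */
  MPI_Wait(&req, MPI_STATUS_IGNORE);
  MPI_Finalize();
  return 0;
}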

[OMPI devel] Barrier() after Finalize() when a file handle is leaked.

2010-09-15 Thread Lisandro Dalcin
f("atexitmpi: finalized=%d\n", flag); MPI_Barrier(MPI_COMM_WORLD); } int main(int argc, char *argv[]) { int keyval = MPI_KEYVAL_INVALID; MPI_Init(&argc, &argv); MPI_Comm_create_keyval(MPI_COMM_NULL_COPY_FN, atexitmpi, &keyval, 0); MPI_Comm_set_attr(MPI_COMM_SELF, ke

Re: [OMPI devel] VampirTrace and MPI_Init_thread()

2010-08-13 Thread Lisandro Dalcin
On 13 August 2010 05:22, Matthias Jurenz wrote: > On Wednesday 11 August 2010 23:16:50 Lisandro Dalcin wrote: >> On 11 August 2010 03:12, Matthias Jurenz > wrote: >> > Hello Lisandro, >> > >> > this problem will be fixed in the next Open MPI release.

Re: [OMPI devel] VampirTrace and MPI_Init_thread()

2010-08-11 Thread Lisandro Dalcin
able U pomp_rd_table U pomp_rd_table U pomp_rd_table U pomp_rd_table That symbol (and possibly others) is undefined and I cannot find it elsewhere. Is there any easy way to build a shared lib with the MPI_xxx symbols? -- Lisandro Dalcin ---

Re: [OMPI devel] VampirTrace and MPI_Init_thread()

2010-08-10 Thread Lisandro Dalcin
to appear, but it is not the case. Many thanks, -- Lisandro Dalcin --- CIMEC (INTEC/CONICET-UNL) Predio CONICET-Santa Fe Colectora RN 168 Km 472, Paraje El Pozo Tel: +54-342-4511594 (ext 1011) Tel/Fax: +54-342-4511169

[OMPI devel] VampirTrace and MPI_Init_thread()

2010-08-10 Thread Lisandro Dalcin
I'm just reporting this issue (related to a mpi4py bug report that arrived at my inbox months ago). -- Lisandro Dalcin --- CIMEC (INTEC/CONICET-UNL) Predio CONICET-Santa Fe Colectora RN 168 Km 472, Paraje El Pozo Tel: +54-342-4511594 (ext 1011) Tel/Fax: +54-342-4511169

[OMPI devel] MPI_Type_free(MPI_BYTE) not failing after MPI_Win_create()

2010-06-18 Thread Lisandro Dalcin
MPI_Win_free(&win); } #endif { MPI_Datatype byte = MPI_BYTE; MPI_Type_free(&byte); } MPI_Finalize(); return 0; } -- Lisandro Dalcin --- CIMEC (INTEC/CONICET-UNL) Predio CONICET-Santa Fe Colectora RN 168 Km 472, Paraje El Pozo Tel: +54-342-4511594 (ext 1011) Tel/Fax: +54-342-4511169
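
The snippet above is truncated; a completed sketch of the test (my reconstruction): freeing a predefined datatype must always raise MPI_ERR_TYPE, yet after a window was created it reportedly stops failing.

#include <mpi.h>
int main(int argc, char *argv[])
{
  int buf = 0;
  MPI_Win win;
  MPI_Init(&argc, &argv);
  MPI_Win_create(&buf, sizeof(buf), 1, MPI_INFO_NULL, MPI_COMM_WORLD, &win);
  MPI_Win_free(&win);
  {
    MPI_Datatype byte = MPI_BYTE;
    MPI_Type_free(&byte);  /* should fail with MPI_ERR_TYPE */
  }
  MPI_Finalize();
  return 0;
}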

[OMPI devel] malloc(0) warnings

2010-05-05 Thread Lisandro Dalcin
bytes (coll_inter_scatterv.c, 82) -- Lisandro Dalcin --- CIMEC (INTEC/CONICET-UNL) Predio CONICET-Santa Fe Colectora RN 168 Km 472, Paraje El Pozo Tel: +54-342-4511594 (ext 1011) Tel/Fax: +54-342-4511169

Re: [OMPI devel] RFC: ABI break between 1.4 and 1.5 / .so versioning

2010-02-19 Thread Lisandro Dalcin
trick, just in case a sysadmin desperately needs the hack because of pressure from some user with ABI issues. -- Lisandro Dalcin --- Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC) Instituto de Desarrollo Tecnológico para la Industria Química (INTEC) Consejo Nacio

Re: [OMPI devel] failure withzero-lengthReduce()andbothsbuf=rbuf=NULL

2010-02-11 Thread Lisandro Dalcin
e end, I agree that representing zero-length arrays with (pointer=NULL,length=0) should be regarded as bad practice... -- Lisandro Dalcin --- Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC) Instituto de Desarrollo Tecnológico para la Industria Química (INTEC

[OMPI devel] MPI_Win_get_errhandler() and MPI_Win_set_errhandler() do not fail when passing MPI_WIN_NULL

2010-02-11 Thread Lisandro Dalcin
I've reported this long ago (alongside other issues now fixed)... I can see that this is fixed in trunk and branches/v1.5, but not backported to branches/v1.4 Any chance to get this for 1.4.2? Or should it wait until 1.5? -- Lisandro Dalcin --- Centro Internacional de Mé

Re: [OMPI devel] failure withzero-lengthReduce()andbothsbuf=rbuf=NULL

2010-02-11 Thread Lisandro Dalcin
n Windows, Linux and OS X, with many of the MPI-1 and MPI-2 implementations out there... Consistent behavior and standard compliance on MPI implementations is FUNDAMENTAL to develop portable wrappers for other languages... Unfortunately, things are not so easy; mpi4py's source code and testsuite i

[OMPI devel] Request_free() and Cancel() with REQUEST_NULL

2010-02-11 Thread Lisandro Dalcin
_Cancel(&req); MPI_Finalize(); return 0; } PS: The code below was tested with 1.4.1 -- Lisandro Dalcin --- Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC) Instituto de Desarrollo Tecnológico para la Industria Química (INTEC) Consejo Nacional de Investigaciones Cientí
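
A completed sketch of the truncated test (my reconstruction): both calls are handed MPI_REQUEST_NULL, which is not a valid request for either routine.

#include <mpi.h>
int main(int argc, char *argv[])
{
  MPI_Request req = MPI_REQUEST_NULL;
  MPI_Init(&argc, &argv);
  MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
  MPI_Request_free(&req);   /* expect MPI_ERR_REQUEST */
  req = MPI_REQUEST_NULL;
  MPI_Cancel(&req);         /* expect MPI_ERR_REQUEST */
  MPI_Finalize();
  return 0;
}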

Re: [OMPI devel] failure with zero-length Reduce()andbothsbuf=rbuf=NULL

2010-02-10 Thread Lisandro Dalcin
On 10 February 2010 14:19, Jeff Squyres wrote: > On Feb 10, 2010, at 11:59 AM, Lisandro Dalcin wrote: > >> > If I remember correctly, the HPCC pingpong test synchronizes occasionally >> > by >> > having one process send a zero-byte broadcast to all other pr

Re: [OMPI devel] failure with zero-length Reduce() andbothsbuf=rbuf=NULL

2010-02-10 Thread Lisandro Dalcin
t the synchronization they were looking for. > Or use MPI_Barrier() ... -- Lisandro Dalcin --- Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC) Instituto de Desarrollo Tecnológico para la Industria Química (INTEC) Consejo Nacional de Investigaciones Científicas

Re: [OMPI devel] failure with zero-length Reduce() andbothsbuf=rbuf=NULL

2010-02-10 Thread Lisandro Dalcin
could be non-NULL and always different (i.e. what malloc(0) returns on some platforms), or the pointer could be NULL (because that's what malloc(0) returns, or because the implementation code special-cases things by enforcing ptr=NULL,len=0 for zero-length array instances). As there are different ways

Re: [OMPI devel] failure with zero-length Reduce() and both sbuf=rbuf=NULL

2009-12-11 Thread Lisandro Dalcin
On Thu, Dec 10, 2009 at 4:26 PM, George Bosilca wrote: > Lisandro, > > This code is not correct from the MPI standard perspective. The reason is > independent of the datatype or count, it is solely related to the fact that > the MPI_Reduce cannot accept a sendbuf equal to the recvbuf (or one has

[OMPI devel] failure with zero-length Reduce() and both sbuf=rbuf=NULL

2009-12-10 Thread Lisandro Dalcin
See the code below. The commented-out combinations for sbuf,rbuf do work, but the one passing sbuf=rbuf=NULL (i.e., the uncommented one shown below) makes the call fail with MPI_ERR_ARG. #include <mpi.h> int main( int argc, char ** argv ) { int ierr; int sbuf,rbuf; MPI_Init(&argc, &argv); ierr = M
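
The code is cut off above; a completed sketch of the failing combination (my reconstruction, assuming MPI_INT and MPI_SUM):

#include <mpi.h>
int main(int argc, char **argv)
{
  int ierr;
  MPI_Init(&argc, &argv);
  /* zero-length reduce with sbuf=rbuf=NULL, reported to fail with MPI_ERR_ARG */
  ierr = MPI_Reduce(NULL, NULL, 0, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
  (void)ierr;
  MPI_Finalize();
  return 0;
}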

Re: [OMPI devel] possible bugs and unexpected values in returned errors classes

2009-12-09 Thread Lisandro Dalcin
It seems that this issue got lost. On Thu, Feb 12, 2009 at 9:02 PM, Jeff Squyres wrote: > On Feb 11, 2009, at 8:24 AM, Lisandro Dalcin wrote: > >> Below a list of stuff that I've got by running mpi4py testsuite. >> >> 4)  When passing MPI_WIN_NULL,

[OMPI devel] MPI_Group_{incl|excl} with nranks=0 and ranks=NULL

2009-10-21 Thread Lisandro Dalcin
Currently (trunk, just svn update'd), the following call fails (because of the ranks=NULL pointer): MPI_Group_{incl|excl}(group, 0, NULL, &newgroup) BTW, MPI_Group_translate_ranks() has similar issues... Provided that Open MPI accepts the combination (int_array_size=0, int_array_ptr=NULL) in othe
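
A minimal sketch of the failing calls (reconstruction):

#include <mpi.h>
int main(int argc, char *argv[])
{
  MPI_Group world, newgroup;
  MPI_Init(&argc, &argv);
  MPI_Comm_group(MPI_COMM_WORLD, &world);
  MPI_Group_incl(world, 0, NULL, &newgroup);  /* reported to fail */
  if (newgroup != MPI_GROUP_EMPTY) MPI_Group_free(&newgroup);
  MPI_Group_free(&world);
  MPI_Finalize();
  return 0;
}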

Re: [OMPI devel] ompi-trunk: have MPI_REAL2 (if available) but missing MPI_COMPLEX4

2009-09-26 Thread Lisandro Dalcin
e0 D ompi_mpi_real2 So if you have support for real(kind=2) in "ompi_mpi_real2" ... Do you still think that it is so hard to support complex(kind=4) ?? Anyway, I see that MPI_REAL2 is never #define'd to &ompi_mpi_real2 . >  george. > > On Sep 26, 2009, at 11:04 , L

Re: [OMPI devel] ompi-trunk: have MPI_REAL2 (if available) but missing MPI_COMPLEX4

2009-09-26 Thread Lisandro Dalcin
support them) are an omission in the 2.2 standard. On Wed, Sep 23, 2009 at 4:33 PM, Lisandro Dalcin wrote: > Disclaimer: I have almost no experience with Fortran, nor I'm needing > this, but anyway (perhaps just as a reminder for you) :-)... > > Provided that: > > 1) Open

[OMPI devel] ompi-trunk: have MPI_REAL2 (if available) but missing MPI_COMPLEX4

2009-09-23 Thread Lisandro Dalcin
Disclaimer: I have almost no experience with Fortran, nor do I need this, but anyway (perhaps just as a reminder for you) :-)... Provided that: 1) Open MPI exposes MPI_LOGICAL{1|2|4|8}, and they are not (AFAIK) listed in the MPI standard (I cannot find them in MPI-2.2) 2) The MPI-2.2 standard

Re: [OMPI devel] Dynamic languages, dlopen() issues, and symbol visibility of libtool ltdl API in current trunk

2009-09-22 Thread Lisandro Dalcin
btool.patches/9446 >> >> So we would (others can speak up if not) certainly consider such a >> wrapper, but I think we need to wait for the next libtool release >> (unless there is other magic we can do) before it would be usable. >> >> Do others have any

[OMPI devel] Dynamic languages, dlopen() issues, and symbol visibility of libtool ltdl API in current trunk

2009-09-16 Thread Lisandro Dalcin
Hi all.. I have to contact you again about the issues related to dlopen()ing libmpi with RTLD_LOCAL, as many dynamic languages (Python in my case) do. So far, I've been able to manage the issues (despite the "do nothing" policy from Open MPI devs, which I understand) in a more or less portable man

[OMPI devel] more bug/comments for current trunk

2009-09-02 Thread Lisandro Dalcin
Disclaimer: this is for trunk svn up'ed yesterday. The code below should fail with ERR_COMM, but it succeeds... #include <mpi.h> int main(int argc, char **argv) { int *value, flag; MPI_Init(NULL, NULL); MPI_Comm_get_attr(MPI_COMM_NULL, MPI_TAG_UB, &value, &flag); MPI_Finalize(); return 0; } A

[OMPI devel] Cannot Free() a datatype created with Dup() or Create_resized()

2009-08-31 Thread Lisandro Dalcin
In current ompi-trunk (svn up'ed and built a few minutes ago), a Free() from a datatype obtained with Dup() or Create_resized() from a predefined datatype is failing with ERR_TYPE... Is this change intentional or is it a regression? $ cat typedup.py from mpi4py import MPI t = MPI.INT.Dup() t.Fre

[OMPI devel] MPI_Accumulate() with MPI_PROC_NULL target rank

2009-07-15 Thread Lisandro Dalcin
The MPI-2.1 standard says: "MPI_PROC_NULL is a valid target rank in the MPI RMA calls MPI_ACCUMULATE, MPI_GET, and MPI_PUT. The effect is the same as for MPI_PROC_NULL in MPI point-to-point communication. After any RMA operation with rank MPI_PROC_NULL, it is still necessary to finish the RMA epoc
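
A reconstructed sketch of the case the quote covers (not the original test): an accumulate targeting MPI_PROC_NULL inside a fence epoch, which should be a no-op.

#include <mpi.h>
int main(int argc, char *argv[])
{
  int buf = 1, win_buf = 0;
  MPI_Win win;
  MPI_Init(&argc, &argv);
  MPI_Win_create(&win_buf, sizeof(win_buf), sizeof(int),
                 MPI_INFO_NULL, MPI_COMM_WORLD, &win);
  MPI_Win_fence(0, win);
  MPI_Accumulate(&buf, 1, MPI_INT, MPI_PROC_NULL,
                 0, 1, MPI_INT, MPI_SUM, win);
  MPI_Win_fence(0, win);  /* finish the epoch even for a null target */
  MPI_Win_free(&win);
  MPI_Finalize();
  return 0;
}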

[OMPI devel] some comments on attribute catching, create/free() keyvals and all that.

2009-03-13 Thread Lisandro Dalcin
e(&tmp2); MPI_Finalize(); printf("MPI_KEYVAL_INVALID: %d\n", MPI_KEYVAL_INVALID); printf("Key1: %d\n", Key1); printf("tmp1: %d\n", tmp1); printf("Key2: %d\n", Key2); printf("tmp2: %d\n", tmp2); return 0; } -- Forwarded message

Re: [OMPI devel] possible bugs and unexpected values in returned errors classes

2009-02-19 Thread Lisandro Dalcin
On Thu, Feb 19, 2009 at 10:54 AM, Jeff Squyres wrote: > On Feb 16, 2009, at 9:14 AM, Lisandro Dalcin wrote: > >> After running my testsuite again and next looking at >> "ompi/mpi/c/comm_set_errhandler.c", I noticed that >> MPI_Comm_set_errhandler() d

Re: [OMPI devel] possible bugs and unexpected values in returned errors classes

2009-02-16 Thread Lisandro Dalcin
Just found something new to comment after diving into the actual sources On Thu, Feb 12, 2009 at 10:02 PM, Jeff Squyres wrote: > On Feb 11, 2009, at 8:24 AM, Lisandro Dalcin wrote: >> >> 1) When passing MPI_COMM_NULL, MPI_Comm_get_errhandler() fails with >> MPI_ER

Re: [OMPI devel] possible bugs and unexpected values in returned errors classes

2009-02-16 Thread Lisandro Dalcin
On Thu, Feb 12, 2009 at 10:02 PM, Jeff Squyres wrote: > On Feb 11, 2009, at 8:24 AM, Lisandro Dalcin wrote: > >> Below a list of stuff that I've got by running mpi4py testsuite. Never >> reported them before just because some of them are not actually >> errors, bu

[OMPI devel] possible bugs and unexpected values in returned errors classes

2009-02-11 Thread Lisandro Dalcin
Below a list of stuff that I've got by running mpi4py testsuite. Never reported them before just because some of them are not actually errors, but anyway, I want to raise the discussion. - Likely bugs (regarding my interpretation of the MPI standard) 1) When passing MPI_REQUEST_NULL, MPI_Request_

[OMPI devel] likely bad return from MPI_File_c2f

2009-02-10 Thread Lisandro Dalcin
Try to run the trivial program below. MPI_File_c2f(MPI_FILE_NULL) returns "-1" (minus one); however, it seems the routine should return "0" (zero). #include <mpi.h> #include <stdio.h> int main() { MPI_Fint i; MPI_File f; MPI_Init(0,0); i = MPI_File_c2f(MPI_FILE_NULL); printf("MPI_File_c2f(MPI_FILE_NULL

[OMPI devel] some possible bugs after trying 1.2.6

2008-04-14 Thread Lisandro Dalcin
Hi all, I've just downloaded and installed release 1.2.6. Additionally, I'm reimplementing from scratch my Python wrappers for MPI using some more advanced tools than manual C coding. Now, I do not try in any way to do argument checking as I did before. Then I've run all my unittest machinery. An

[OMPI devel] valgrind warnings (uninited mem passed to syscall)

2007-12-17 Thread Lisandro Dalcin
Dear all, I'm getting valgrind warnings related to syscalls with uninitialized memory (with release 1.2.4). Before providing more details and code reproducing the problem, I would like to know if there is any configure option I should take care of which enables extra memory initialization (--enab

[OMPI devel] MPI_GROUP_EMPTY and MPI_Group_free()

2007-12-04 Thread Lisandro Dalcin
Dear all, As I see some activity on a related ticket, below are some comments I sent to Bill Gropp some days ago about this subject. Bill did not write me back; I know he is really busy. Group operations are supposed to return new groups, so the user has to free the result. Additionally, the standa

Re: [OMPI devel] [OMPI users] Possible Memcpy bug in MPI_Comm_split

2007-08-17 Thread Lisandro Dalcin
On 8/16/07, George Bosilca wrote: > Well, finally someone discovered it :) I know about this problem for > quite a while now, it pop up during our own valgrind test of the > collective module in Open MPI. However, it never create any problems > in the applications, at least not as far as I know. T

Re: [OMPI devel] MPI_Win_get_group

2007-08-07 Thread Lisandro Dalcin
On 8/1/07, Jeff Squyres wrote: > BTW, I totally forgot to mention a notable C++ MPI bindings project > that is the next-generation/successor to OMPI: the Boost C++ MPI > bindings (boost.mpi). > > http://www.generic-programming.org/~dgregor/boost.mpi/doc/ > > I believe there's also python bind

Re: [OMPI devel] MPI_Win_get_group

2007-08-07 Thread Lisandro Dalcin
On 8/6/07, Jeff Squyres wrote: > On Aug 6, 2007, at 2:42 PM, Lisandro Dalcin wrote: > > Because many predefined, intrinsic objects cannot (or should not be > > able to) be freed, according to the standard. > > I understand that. :-) But why would you call XXX.Free() on an

Re: [OMPI devel] MPI_Win_get_group

2007-08-06 Thread Lisandro Dalcin
On 8/1/07, Jeff Squyres wrote: > On Jul 31, 2007, at 6:43 PM, Lisandro Dalcin wrote: >> having to call XXX.Free() for every > object I get from a call like XXX.Get_something() is really an > unnecessary pain. > Gotcha. > But I don't see why this

Re: [OMPI devel] MPI_Win_get_group

2007-07-31 Thread Lisandro Dalcin
On 7/31/07, Jeff Squyres wrote: > Just curious -- why do you need to know if a handle refers to a > predefined object? If I understand correctly, new handles should be freed in order not to leak things, to follow good programming practices, and to be completely sure a valgrind run does not repor

Re: [OMPI devel] MPI_Win_get_group

2007-07-31 Thread Lisandro Dalcin
On 7/31/07, Dries Kimpe wrote: > The MPI_File_get_view description in the standard has some issues related > to copies and named datatypes: > > see > http://www-unix.mcs.anl.gov/~gropp/projects/parallel/MPI/mpi-errata/discuss/fileview/fileview-1-clean.txt Indeed, your comment was exactly the sour

Re: [OMPI devel] MPI_Win_get_group

2007-07-30 Thread Lisandro Dalcin
On 7/30/07, George Bosilca wrote: > In the data-type section there is an advice to implementors that > state that a copy can simply increase the reference count if > applicable. So, we might want to apply the same logic here ... BTW, you just mentioned another obscure case. Does this apply to NAMED d

[OMPI devel] looking up service

2007-07-30 Thread Lisandro Dalcin
Is MPI_Lookup_name() supposed to work on the v1.2 branch? I cannot get it working (it fails with MPI_ERR_NAME). -- Lisandro Dalcín --- Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC) Instituto de Desarrollo Tecnológico para la Industria Química (INTEC) Consejo Nacion

Re: [OMPI devel] MPI_Win_get_group

2007-07-30 Thread Lisandro Dalcin
On 7/29/07, Jeff Squyres wrote: > On Jul 28, 2007, at 4:41 PM, Lisandro Dalcin wrote: > > In the mean time, I would prefer to follow the standard as close as > > possible. If not, some external, stupid test suite (like the one I > > have for mpi4py) would report that

[OMPI devel] freeing GROUP_EMPTY

2007-07-28 Thread Lisandro Dalcin
A simple test trying to free GROUP_EMPTY failed with the following trace. a.out: ../opal/class/opal_object.h:403: opal_obj_run_destructors: Assertion `((void *)0) != object->obj_class' failed. [trantor:19821] *** Process received signal *** [trantor:19821] Signal: Aborted (6) [trantor:19821] Signa
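
A minimal sketch of the failing test (reconstruction):

#include <mpi.h>
int main(int argc, char *argv[])
{
  MPI_Group empty = MPI_GROUP_EMPTY;
  MPI_Init(&argc, &argv);
  MPI_Group_free(&empty);  /* aborts with the assertion shown above */
  MPI_Finalize();
  return 0;
}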

[OMPI devel] MPI_Comm_free with MPI_COMM_SELF

2007-07-28 Thread Lisandro Dalcin
I tried to free COMM_SELF, and it seems to call the error handler attached to COMM_WORLD. Is this intended? Shouldn't OMPI use the error handler attached to COMM_SELF? For reference, I tried this with MPICH2, and of course the call fails, but using the error handler in COMM_SELF. Again, this is a new corne

Re: [OMPI devel] MPI_Win_get_group

2007-07-28 Thread Lisandro Dalcin
On 7/28/07, Brian Barrett wrote: > In my opinion, we conform to the standard. We reference count the > group, it's incremented on call to MPI_WIN_GROUP, and you can safely > call MPI_GROUP_FREE on the group returned from MPI_WIN_GROUP. Groups > are essentially immutable, so there's no way I can

[OMPI devel] MPI_Win_get_group

2007-07-27 Thread Lisandro Dalcin
The MPI-2 standard says (see bottom of ) MPI_WIN_GET_GROUP returns a duplicate of the group of the communicator used to create the window associated with win. The group is returned in group. Please note the 'duplicate' ... Well, it
