[OMPI devel] MPI_Mrecv(..., MPI_STATUS_IGNORE) in Open MPI 1.7.1

2013-05-01 Thread Lisandro Dalcin
code: Address not mapped (1) [localhost:17489] Failing at address: (nil) ... -- Lisandro Dalcin --- CIMEC (INTEC/CONICET-UNL) Predio CONICET-Santa Fe Colectora RN 168 Km 472, Paraje El Pozo 3000 Santa Fe, Argentina Tel: +54-342-4511594 (ext 1011) Tel/Fax: +54-342-4511169
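
The preview above is truncated; a minimal sketch of the kind of call that can hit the reported crash (the self send / matched-probe pairing and buffer sizes are my assumptions, not the original test) might be:

  #include <mpi.h>
  int main(int argc, char *argv[])
  {
    int sbuf = 7, rbuf = 0;
    MPI_Message msg;
    MPI_Request req;
    MPI_Init(&argc, &argv);
    /* post a self send so the matched probe has something to find */
    MPI_Isend(&sbuf, 1, MPI_INT, 0, 0, MPI_COMM_SELF, &req);
    MPI_Mprobe(0, 0, MPI_COMM_SELF, &msg, MPI_STATUS_IGNORE);
    /* the report concerns passing MPI_STATUS_IGNORE here */
    MPI_Mrecv(&rbuf, 1, MPI_INT, &msg, MPI_STATUS_IGNORE);
    MPI_Wait(&req, MPI_STATUS_IGNORE);
    MPI_Finalize();
    return 0;
  }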

[OMPI devel] MPI_Is_thread_main() with provided=MPI_THREAD_SERIALIZED

2013-09-04 Thread Lisandro Dalcin
r/src/0a159982d7204d4b4b9fa61771d0fc7e9dc16771/ompi/mpi/c/is_thread_main.c?at=default#cl-50 -- Lisandro Dalcin --- CIMEC (INTEC/CONICET-UNL) Predio CONICET-Santa Fe Colectora RN 168 Km 472, Paraje El Pozo 3000 Santa Fe, Argentina Tel: +54-342-4511594 (ext 1011) Tel/Fax: +54-342-4511169
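
A minimal sketch of the scenario named in the subject (the output format is mine, not the original test):

  #include <stdio.h>
  #include <mpi.h>
  int main(int argc, char *argv[])
  {
    int provided, flag;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_SERIALIZED, &provided);
    /* the calling thread initialized MPI, so flag should be 1 */
    MPI_Is_thread_main(&flag);
    printf("provided=%d is_thread_main=%d\n", provided, flag);
    MPI_Finalize();
    return 0;
  }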

[OMPI devel] Missing MPI 3 definitions

2014-03-27 Thread Lisandro Dalcin
could simply ignore the info handle, and the second could just return a brand new empty info handle (well, unless you implemented MPI_Comm_dup_with_info() to actually use the info hints). -- Lisandro Dalcin --- CIMEC (UNL/CONICET) Predio CONICET-Santa Fe Colectora RN 168 Km 472, Paraje El

[OMPI devel] Missing error strings for MPI_ERR_RMA_XXX error classes

2014-04-10 Thread Lisandro Dalcin
RR_RMA_SHARED The comment is wrong, the last predefined error class is MPI_ERR_RMA_SHARED and not MPI_ERR_RMA_FLAVOR. -- Lisandro Dalcin --- CIMEC (UNL/CONICET) Predio CONICET-Santa Fe Colectora RN 168 Km 472, Paraje El Pozo 3000 Santa Fe, Argentina Tel: +54-342-4511594 (ext 1016) Tel
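
A short sketch that exercises the error strings in question (not the original test code):

  #include <stdio.h>
  #include <mpi.h>
  int main(int argc, char *argv[])
  {
    char msg[MPI_MAX_ERROR_STRING];
    int len;
    MPI_Init(&argc, &argv);
    /* MPI-3 RMA error classes whose strings were reported missing */
    MPI_Error_string(MPI_ERR_RMA_SHARED, msg, &len);
    printf("MPI_ERR_RMA_SHARED: %s\n", msg);
    MPI_Error_string(MPI_ERR_RMA_FLAVOR, msg, &len);
    printf("MPI_ERR_RMA_FLAVOR: %s\n", msg);
    MPI_Finalize();
    return 0;
  }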

[OMPI devel] querying Op commutativity for predefined reduction operations.

2014-04-21 Thread Lisandro Dalcin
239272148992] [kw2060:19303] *** on communicator MPI_COMM_WORLD [kw2060:19303] *** MPI_ERR_OP: invalid reduce operation [kw2060:19303] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [kw2060:19303] ***and potentially your MPI job) -- Lisandro Dalcin --- CIMEC
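
A sketch of the kind of query the error output above suggests was rejected (the choice of MPI_SUM is my assumption):

  #include <stdio.h>
  #include <mpi.h>
  int main(int argc, char *argv[])
  {
    int commute;
    MPI_Init(&argc, &argv);
    /* querying commutativity of a predefined reduction operation */
    MPI_Op_commutative(MPI_SUM, &commute);
    printf("MPI_SUM commutative: %d\n", commute);
    MPI_Finalize();
    return 0;
  }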

[OMPI devel] Win_fence() with assertion=MPI_MODE_NOPRECEDE|MPI_MODE_NOSUCCEED

2014-04-21 Thread Lisandro Dalcin
esses in this win will now abort, [kw2060:19890] ***and potentially your MPI job) [dalcinl@kw2060 openmpi]$ -- Lisandro Dalcin --- CIMEC (UNL/CONICET) Predio CONICET-Santa Fe Colectora RN 168 Km 472, Paraje El Pozo 3000 Santa Fe, Argentina Tel: +54-342-4511594 (ext 1016) Tel/Fax: +54-342-4511169
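
A minimal sketch of the no-op epoch named in the subject (window layout is my assumption):

  #include <mpi.h>
  int main(int argc, char *argv[])
  {
    int buf = 0;
    MPI_Win win;
    MPI_Init(&argc, &argv);
    MPI_Win_create(&buf, sizeof(buf), sizeof(buf),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);
    /* a fence that neither ends nor starts an access epoch */
    MPI_Win_fence(MPI_MODE_NOPRECEDE | MPI_MODE_NOSUCCEED, win);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
  }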

[OMPI devel] MPI_Type_create_hindexed_block() segfaults

2014-04-21 Thread Lisandro Dalcin
./a.out[0x40080c] [kw2060:20304] [ 5] /lib64/libc.so.6(__libc_start_main+0xf5)[0x327bc21d65] [kw2060:20304] [ 6] ./a.out[0x4006f9] [kw2060:20304] *** End of error message *** Segmentation fault (core dumped) -- Lisandro Dalcin --- CIMEC (UNL/CONICET) Predio CONICET-Santa Fe Colect
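
A sketch of the call named in the subject (count and displacements are my assumptions, not the original program):

  #include <mpi.h>
  int main(int argc, char *argv[])
  {
    MPI_Aint displs[3] = {0, 8, 16};
    MPI_Datatype newtype;
    MPI_Init(&argc, &argv);
    MPI_Type_create_hindexed_block(3, 1, displs, MPI_INT, &newtype);
    MPI_Type_commit(&newtype);
    MPI_Type_free(&newtype);
    MPI_Finalize();
    return 0;
  }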

[OMPI devel] Issues with MPI_Add_error_class()

2014-04-21 Thread Lisandro Dalcin
s in this communicator will now abort, [kw2060:20883] *** and potentially your MPI job) -- Lisandro Dalcin --- CIMEC (UNL/CONICET) Predio CONICET-Santa Fe Colectora RN 168 Km 472, Paraje El Pozo 3000 Santa Fe, Argentina Tel: +54-342-4511594 (ext 1016) Tel/Fax: +54-342-4511169
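
A sketch of the dynamic error-class machinery the report is about (the error string and printed fields are mine):

  #include <stdio.h>
  #include <mpi.h>
  int main(int argc, char *argv[])
  {
    int errclass, errcode;
    MPI_Init(&argc, &argv);
    MPI_Add_error_class(&errclass);
    MPI_Add_error_code(errclass, &errcode);
    MPI_Add_error_string(errcode, "user-defined error");
    printf("class=%d code=%d MPI_ERR_LASTCODE=%d\n",
           errclass, errcode, MPI_ERR_LASTCODE);
    MPI_Finalize();
    return 0;
  }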

[OMPI devel] Different behaviour with MPI_IN_PLACE in MPI_Reduce_scatter() and MPI_Ireduce_scatter()

2014-04-21 Thread Lisandro Dalcin
I'm not sure this is actually a bug, but the difference may surprise users. It seems that the implementation of MPI_Ireduce_scatter(MPI_IN_PLACE,...) (ab?)uses the recvbuf to compute the intermediate reduction, while MPI_Reduce_scatter(MPI_IN_PLACE,...) does not. Look at the following code (setup
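
The code referenced above is cut off; a single-process sketch along the same lines (the counts, the initial buffer value, and the NBCOLL switch mirror the follow-up later in this thread, but are my reconstruction, not the original file):

  #include <stdio.h>
  #include <mpi.h>
  int main(int argc, char *argv[])
  {
    int rcounts[1] = {1};
    int rbuf[1]    = {1};
    MPI_Request req;
    MPI_Init(&argc, &argv);
  #ifdef NBCOLL
    /* nonblocking in-place variant */
    MPI_Ireduce_scatter(MPI_IN_PLACE, rbuf, rcounts,
                        MPI_INT, MPI_SUM, MPI_COMM_WORLD, &req);
    MPI_Wait(&req, MPI_STATUS_IGNORE);
  #else
    /* blocking in-place variant */
    MPI_Reduce_scatter(MPI_IN_PLACE, rbuf, rcounts,
                       MPI_INT, MPI_SUM, MPI_COMM_WORLD);
  #endif
    printf("[0] rbuf[0]=%d expected: 1\n", rbuf[0]);
    MPI_Finalize();
    return 0;
  }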

[OMPI devel] MPI_Comm_create_group()

2014-04-21 Thread Lisandro Dalcin
/Devel/BUGS-MPI/openmpi/a.out) ==22675== -- Lisandro Dalcin --- CIMEC (UNL/CONICET) Predio CONICET-Santa Fe Colectora RN 168 Km 472, Paraje El Pozo 3000 Santa Fe, Argentina Tel: +54-342-4511594 (ext 1016) Tel/Fax: +54-342-4511169

[OMPI devel] Patch to fix valgrind warning

2014-04-24 Thread Lisandro Dalcin
.so.1.0) ==19533==by 0x38442F2357: ??? (in /usr/lib64/libpython2.7.so.1.0) ==19533==by 0x38442F2FF0: ??? (in /usr/lib64/libpython2.7.so.1.0) ==19533==by 0x38442F323C: ??? (in /usr/lib64/libpython2.7.so.1.0) -- Lisandro Dalcin --- CIMEC (UNL/CONICET) Predio CONICET-Santa Fe

[OMPI devel] likely bad return from MPI_File_c2f

2009-02-10 Thread Lisandro Dalcin
Try to run the trivial program below. MPI_File_c2f(MPI_FILE_NULL) returns "-1" (minus one); however, it seems the routine should return "0" (zero). #include <stdio.h> #include <mpi.h> int main() { MPI_Fint i; MPI_File f; MPI_Init(0,0); i = MPI_File_c2f(MPI_FILE_NULL); printf("MPI_File_c2f(MPI_FILE_NULL
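
A completed version of the truncated program above (the printf format string is my reconstruction):

  #include <stdio.h>
  #include <mpi.h>
  int main(int argc, char *argv[])
  {
    MPI_Fint i;
    MPI_Init(&argc, &argv);
    i = MPI_File_c2f(MPI_FILE_NULL);
    /* expected 0 (the Fortran MPI_FILE_NULL); the reported value was -1 */
    printf("MPI_File_c2f(MPI_FILE_NULL) = %d\n", (int)i);
    MPI_Finalize();
    return 0;
  }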

[OMPI devel] possible bugs and unexpected values in returned error classes

2009-02-11 Thread Lisandro Dalcin
Below is a list of issues I've found by running the mpi4py testsuite. I never reported them before because some of them are not actually errors, but I want to raise the discussion anyway. - Likely bugs (regarding my interpretation of the MPI standard) 1) When passing MPI_REQUEST_NULL, MPI_Request_

Re: [OMPI devel] possible bugs and unexpected values in returned error classes

2009-02-16 Thread Lisandro Dalcin
On Thu, Feb 12, 2009 at 10:02 PM, Jeff Squyres wrote: > On Feb 11, 2009, at 8:24 AM, Lisandro Dalcin wrote: > >> Below a list of stuff that I've got by running mpi4py testsuite. Never >> reported them before just because some of them are not actually >> errors, bu

Re: [OMPI devel] possible bugs and unexpected values in returned error classes

2009-02-16 Thread Lisandro Dalcin
Just found something new to comment after diving into the actual sources On Thu, Feb 12, 2009 at 10:02 PM, Jeff Squyres wrote: > On Feb 11, 2009, at 8:24 AM, Lisandro Dalcin wrote: >> >> 1) When passing MPI_COMM_NULL, MPI_Comm_get_errhandler() fails with >> MPI_ER

Re: [OMPI devel] possible bugs and unexpected values in returned error classes

2009-02-19 Thread Lisandro Dalcin
On Thu, Feb 19, 2009 at 10:54 AM, Jeff Squyres wrote: > On Feb 16, 2009, at 9:14 AM, Lisandro Dalcin wrote: > >> After running my testsuite again and next looking at >> "ompi/mpi/c/comm_set_errhandler.c", I noticed that >> MPI_Comm_set_errhandler() d

[OMPI devel] some comments on attribute catching, create/free() keyvals and all that.

2009-03-13 Thread Lisandro Dalcin
e(&tmp2); MPI_Finalize(); printf("MPI_KEYVAL_INVALID: %d\n", MPI_KEYVAL_INVALID); printf("Key1: %d\n", Key1); printf("tmp1: %d\n", tmp1); printf("Key2: %d\n", Key2); printf("tmp2: %d\n", tmp2); return 0; } -- Forwarded message

[OMPI devel] MPI_Accumulate() with MPI_PROC_NULL target rank

2009-07-15 Thread Lisandro Dalcin
The MPI 2-1 standard says: "MPI_PROC_NULL is a valid target rank in the MPI RMA calls MPI_ACCUMULATE, MPI_GET, and MPI_PUT. The effect is the same as for MPI_PROC_NULL in MPI point-to-point communication. After any RMA operation with rank MPI_PROC_NULL, it is still necessary to finish the RMA epoc
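
A sketch of an RMA epoch with an MPI_PROC_NULL target, per the quoted standard text (window layout and the reduction op are my assumptions):

  #include <mpi.h>
  int main(int argc, char *argv[])
  {
    int buf = 1;
    MPI_Win win;
    MPI_Init(&argc, &argv);
    MPI_Win_create(&buf, sizeof(buf), sizeof(buf),
                   MPI_INFO_NULL, MPI_COMM_SELF, &win);
    MPI_Win_fence(0, win);
    /* MPI_PROC_NULL is a valid target rank; the call should be a no-op */
    MPI_Accumulate(&buf, 1, MPI_INT, MPI_PROC_NULL,
                   0, 1, MPI_INT, MPI_SUM, win);
    MPI_Win_fence(0, win);   /* the epoch must still be finished */
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
  }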

[OMPI devel] Cannot Free() a datatype created with Dup() or Create_resized()

2009-08-31 Thread Lisandro Dalcin
In current ompi-trunk (svn up'ed and built a few minutes ago), calling Free() on a datatype obtained with Dup() or Create_resized() from a predefined datatype fails with ERR_TYPE... Is this change intentional or is it a regression? $ cat typedup.py from mpi4py import MPI t = MPI.INT.Dup() t.Fre

[OMPI devel] more bug/comments for current trunk

2009-09-02 Thread Lisandro Dalcin
Disclaimer: this is for trunk svn up'ed yesterday. The code below should fail with ERR_COMM, but it succeeds... #include <mpi.h> int main(int argc, char **argv) { int *value, flag; MPI_Init(NULL, NULL); MPI_Comm_get_attr(MPI_COMM_NULL, MPI_TAG_UB, &value, &flag); MPI_Finalize(); return 0; } A

[OMPI devel] Dynamic languages, dlopen() issues, and symbol visibility of libtool ltdl API in current trunk

2009-09-16 Thread Lisandro Dalcin
Hi all.. I have to contact you again about the issues related to dlopen()ing libmpi with RTLD_LOCAL, as many dynamic languages (Python in my case) do. So far, I've been able to manage the issues (despite the "do nothing" policy from Open MPI devs, which I understand) in a more or less portable man

Re: [OMPI devel] Dynamic languages, dlopen() issues, and symbol visibility of libtool ltdl API in current trunk

2009-09-22 Thread Lisandro Dalcin
btool.patches/9446 >> >> So we would (others can speak up if not) certainly consider such a >> wrapper, but I think we need to wait for the next libtool release >> (unless there is other magic we can do) before it would be usable. >> >> Do others have any

[OMPI devel] ompi-trunk: have MPI_REAL2 (if available) but missing MPI_COMPLEX4

2009-09-23 Thread Lisandro Dalcin
Disclaimer: I have almost no experience with Fortran, nor do I need this, but anyway (perhaps just as a reminder for you) :-)... Provided that: 1) Open MPI exposes MPI_LOGICAL{1|2|4|8}, and they are not (AFAIK) listed in the MPI standard (I cannot find them in MPI-2.2) 2) The MPI-2.2 standard

Re: [OMPI devel] ompi-trunk: have MPI_REAL2 (if available) but missing MPI_COMPLEX4

2009-09-26 Thread Lisandro Dalcin
support them) are an omission in the 2.2 standard. On Wed, Sep 23, 2009 at 4:33 PM, Lisandro Dalcin wrote: > Disclaimer: I have almost no experience with Fortran, nor I'm needing > this, but anyway (perhaps just as a reminder for you) :-)... > > Provided that: > > 1) Open

Re: [OMPI devel] ompi-trunk: have MPI_REAL2 (if available) but missing MPI_COMPLEX4

2009-09-26 Thread Lisandro Dalcin
e0 D ompi_mpi_real2 So if you have support for real(kind=2) in "ompi_mpi_real2" ... Do you still think that it is so hard to support complex(kind=4) ?? Anyway, I see that MPI_REAL2 is never #define'd to &ompi_mpi_real2 . >  george. > > On Sep 26, 2009, at 11:04 , L

[OMPI devel] MPI_Group_{incl|exc} with nranks=0 and ranks=NULL

2009-10-21 Thread Lisandro Dalcin
Currently (trunk, just svn update'd), the following call fails (because of the ranks=NULL pointer) MPI_Group_{incl|excl}(group, 0, NULL, &newgroup) BTW, MPI_Group_translate_ranks() has similar issues... Provided that Open MPI accepts the combination (int_array_size=0, int_array_ptr=NULL) in othe
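
A sketch of the failing combination (zero ranks with a NULL array):

  #include <mpi.h>
  int main(int argc, char *argv[])
  {
    MPI_Group group, newgroup;
    MPI_Init(&argc, &argv);
    MPI_Comm_group(MPI_COMM_WORLD, &group);
    /* n = 0 with ranks = NULL; the standard says newgroup is MPI_GROUP_EMPTY */
    MPI_Group_incl(group, 0, NULL, &newgroup);
    if (newgroup != MPI_GROUP_EMPTY)   /* never free the predefined empty group */
      MPI_Group_free(&newgroup);
    MPI_Group_free(&group);
    MPI_Finalize();
    return 0;
  }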

Re: [OMPI devel] possible bugs and unexpected values in returned error classes

2009-12-09 Thread Lisandro Dalcin
It seems that this issue got lost. On Thu, Feb 12, 2009 at 9:02 PM, Jeff Squyres wrote: > On Feb 11, 2009, at 8:24 AM, Lisandro Dalcin wrote: > >> Below a list of stuff that I've got by running mpi4py testsuite. >> >> 4)  When passing MPI_WIN_NULL,

[OMPI devel] failure with zero-length Reduce() and both sbuf=rbuf=NULL

2009-12-10 Thread Lisandro Dalcin
See the code below. The commented-out combinations for sbuf,rbuf do work, but the one passing sbuf=rbuf=NULL (i.e., the uncommented one shown below) makes the call fail with MPI_ERR_ARG. #include <mpi.h> int main( int argc, char ** argv ) { int ierr; int sbuf,rbuf; MPI_Init(&argc, &argv); ierr = M
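
A stripped-down sketch of the failing case described above (the working, commented-out variants are omitted):

  #include <mpi.h>
  int main(int argc, char *argv[])
  {
    MPI_Init(&argc, &argv);
    /* zero-count reduction with sendbuf = recvbuf = NULL */
    MPI_Reduce(NULL, NULL, 0, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    MPI_Finalize();
    return 0;
  }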

Re: [OMPI devel] failure with zero-length Reduce() and both sbuf=rbuf=NULL

2009-12-11 Thread Lisandro Dalcin
On Thu, Dec 10, 2009 at 4:26 PM, George Bosilca wrote: > Lisandro, > > This code is not correct from the MPI standard perspective. The reason is > independent of the datatype or count, it is solely related to the fact that > the MPI_Reduce cannot accept a sendbuf equal to the recvbuf (or one has

Re: [OMPI devel] failure with zero-length Reduce() and both sbuf=rbuf=NULL

2010-02-10 Thread Lisandro Dalcin
could be non-NULL and always different (i.e. what malloc(0) returns on some platforms), or the pointer could be NULL (because that's what malloc(0) returns, or because the implementation code special-cases things by enforcing ptr=NULL,len=0 for zero-length array instances). As there are different ways

Re: [OMPI devel] failure with zero-length Reduce() and both sbuf=rbuf=NULL

2010-02-10 Thread Lisandro Dalcin
t the synchronization they were looking for. > Or use MPI_Barrier() ... -- Lisandro Dalcin --- Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC) Instituto de Desarrollo Tecnológico para la Industria Química (INTEC) Consejo Nacional de Investigaciones Científicas

Re: [OMPI devel] failure with zero-length Reduce() and both sbuf=rbuf=NULL

2010-02-10 Thread Lisandro Dalcin
On 10 February 2010 14:19, Jeff Squyres wrote: > On Feb 10, 2010, at 11:59 AM, Lisandro Dalcin wrote: > >> > If I remember correctly, the HPCC pingpong test synchronizes occasionally >> > by >> > having one process send a zero-byte broadcast to all other pr

[OMPI devel] Request_free() and Cancel() with REQUEST_NULL

2010-02-11 Thread Lisandro Dalcin
_Cancel(&req); MPI_Finalize(); return 0; } PS: The code below was tested with 1.4.1 -- Lisandro Dalcin --- Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC) Instituto de Desarrollo Tecnológico para la Industria Química (INTEC) Consejo Nacional de Investigaciones Cientí
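
A sketch of the two calls named in the subject (whether they should raise MPI_ERR_REQUEST is the question being asked):

  #include <mpi.h>
  int main(int argc, char *argv[])
  {
    MPI_Request req = MPI_REQUEST_NULL;
    MPI_Init(&argc, &argv);
    MPI_Request_free(&req);   /* null request handle */
    req = MPI_REQUEST_NULL;
    MPI_Cancel(&req);         /* null request handle */
    MPI_Finalize();
    return 0;
  }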

Re: [OMPI devel] failure with zero-length Reduce() and both sbuf=rbuf=NULL

2010-02-11 Thread Lisandro Dalcin
n Windows, Linux and OS X, with many of the MPI-1 and MPI-2 implementations out there... Consistent behavior and standard compliance across MPI implementations is FUNDAMENTAL to developing portable wrappers for other languages... Unfortunately, things are not so easy; mpi4py's source code and testsuite i

[OMPI devel] MPI_Win_get_errhandler() and MPI_Win_set_errhandler() do not fail when passing MPI_WIN_NULL

2010-02-11 Thread Lisandro Dalcin
I've reported this long ago (alongside other issues now fixed)... I can see that this is fixed in trunk and branches/v1.5, but not backported to branches/v1.4 Any chance to get this for 1.4.2? Or should it wait until 1.5? -- Lisandro Dalcin --- Centro Internacional de Mé

Re: [OMPI devel] failure with zero-length Reduce() and both sbuf=rbuf=NULL

2010-02-11 Thread Lisandro Dalcin
e end, I agree that representing zero-length arrays with (pointer=NULL,length=0) should be regarded as bad practice... -- Lisandro Dalcin --- Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC) Instituto de Desarrollo Tecnológico para la Industria Química (INTEC

Re: [OMPI devel] RFC: ABI break between 1.4 and 1.5 / .so versioning

2010-02-19 Thread Lisandro Dalcin
trick, just in case a sysadmin desperately needs the hack because of pressure from some user with ABI issues. -- Lisandro Dalcin --- Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC) Instituto de Desarrollo Tecnológico para la Industria Química (INTEC) Consejo Nacio

[OMPI devel] malloc(0) warnings

2010-05-05 Thread Lisandro Dalcin
bytes (coll_inter_scatterv.c, 82) -- Lisandro Dalcin --- CIMEC (INTEC/CONICET-UNL) Predio CONICET-Santa Fe Colectora RN 168 Km 472, Paraje El Pozo Tel: +54-342-4511594 (ext 1011) Tel/Fax: +54-342-4511169

[OMPI devel] MPI_Type_free(MPI_BYTE) not failing after MPI_Win_create()

2010-06-18 Thread Lisandro Dalcin
MPI_Win_free(&win); } #endif { MPI_Datatype byte = MPI_BYTE; MPI_Type_free(&byte); } MPI_Finalize(); return 0; } -- Lisandro Dalcin --- CIMEC (INTEC/CONICET-UNL) Predio CONICET-Santa Fe Colectora RN 168 Km 472, Paraje El Pozo Tel: +54-342-4511594 (ext 1011) Tel/Fax: +54-342-4511169

[OMPI devel] VampirTrace and MPI_Init_thread()

2010-08-10 Thread Lisandro Dalcin
I'm just reporting this issue (related to an mpi4py bug report that arrived in my inbox months ago). -- Lisandro Dalcin --- CIMEC (INTEC/CONICET-UNL) Predio CONICET-Santa Fe Colectora RN 168 Km 472, Paraje El Pozo Tel: +54-342-4511594 (ext 1011) Tel/Fax: +54-342-4511169

Re: [OMPI devel] VampirTrace and MPI_Init_thread()

2010-08-10 Thread Lisandro Dalcin
to appear, but it is not the case. Many thanks, -- Lisandro Dalcin --- CIMEC (INTEC/CONICET-UNL) Predio CONICET-Santa Fe Colectora RN 168 Km 472, Paraje El Pozo Tel: +54-342-4511594 (ext 1011) Tel/Fax: +54-342-4511169

Re: [OMPI devel] VampirTrace and MPI_Init_thread()

2010-08-11 Thread Lisandro Dalcin
able U pomp_rd_table U pomp_rd_table U pomp_rd_table U pomp_rd_table That symbol (and possibly others) is undefined and I cannot find it elsewhere. Is there any easy way to build a shared lib with the MPI_xxx symbols? -- Lisandro Dalcin ---

Re: [OMPI devel] VampirTrace and MPI_Init_thread()

2010-08-13 Thread Lisandro Dalcin
On 13 August 2010 05:22, Matthias Jurenz wrote: > On Wednesday 11 August 2010 23:16:50 Lisandro Dalcin wrote: >> On 11 August 2010 03:12, Matthias Jurenz > wrote: >> > Hello Lisandro, >> > >> > this problem will be fixed in the next Open MPI release.

[OMPI devel] Barrier() after Finalize() when a file handle is leaked.

2010-09-15 Thread Lisandro Dalcin
f("atexitmpi: finalized=%d\n", flag); MPI_Barrier(MPI_COMM_WORLD); } int main(int argc, char *argv[]) { int keyval = MPI_KEYVAL_INVALID; MPI_Init(&argc, &argv); MPI_Comm_create_keyval(MPI_COMM_NULL_COPY_FN, atexitmpi, &keyval, 0); MPI_Comm_set_attr(MPI_COMM_SELF, ke

[OMPI devel] C type of MPI_UNWEIGHTED and MPI_WEIGHTS_EMPTY

2016-03-13 Thread Lisandro Dalcin
) /* unweighted graph */ #define MPI_WEIGHTS_EMPTY((int *) 3) /* empty weights */ PS: While the current definition is kind of harmless for C, it is likely wrong for C++. -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Sciences

[OMPI devel] Issue with 2.0.0rc3, singleton init

2016-06-16 Thread Lisandro Dalcin
6== The main thread stack size used in this run was 8720384. Killed -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) Extreme Computing Research Center (ECRC) King Abdullah University of Science and Technology (KAUST) http://

[OMPI devel] MPI_Group_intersection: malloc(0) warning with 2.0.0rc3

2016-06-16 Thread Lisandro Dalcin
debug: Request for 0 bytes (group/group.c, 456) -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) Extreme Computing Research Center (ECRC) King Abdullah University of Science and Technology (KAUST) http://ecrc.kaust.edu.sa/

[OMPI devel] 2.0.0rc3 MPI_Comm_split_type()

2016-06-16 Thread Lisandro Dalcin
ception: MPI_ERR_ARG: invalid argument of some other kind -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) Extreme Computing Research Center (ECRC) King Abdullah University of Science and Technology (KAUST) http://ecrc.

[OMPI devel] some possible bugs

2006-09-26 Thread Lisandro Dalcin
I'm developing mpi4py, an MPI port for Python. I've written many unittest scripts for my wrappers, which are also intended to test MPI implementations. Below, I list some issues I've found when building my wrappers with Open MPI 1.1.1. Please let me know your opinions. - MPI_Group_translate_ranks(group

[OMPI devel] Fwd: MPI_INPLACE problem

2006-09-27 Thread Lisandro Dalcin
Here is an example of the problems I have with MPI_INPLACE in OMPI. Hoping this can be useful. Perhaps the problem is not in the OMPI sources, but in my particular build. I've configured with: $ head -n 7 config.log | tail -n 1 $ ./configure --disable-dlopen --prefix /usr/local/openmpi/1.1.1 First I p

[OMPI devel] problem with MPI_[Pack|Unpack]_external

2006-09-29 Thread Lisandro Dalcin
I've just caught a problem with packing/unpacking using 'external32' in Linux. The problem seems to be word ordering; I believe you forgot to make the little-endian <-> big-endian conversion somewhere. Below, an interactive session with ipython (sorry, no time to write it in C) showing the problem.

[OMPI devel] MPI_XXX_{get|set}_errhandler in general , and for files in particular

2006-10-09 Thread Lisandro Dalcin
Looking at the MPI-2 errata document, http://www.mpi-forum.org/docs/errata-20-2.html, it says: Page 61, after line 36. Add the following (paralleling the errata to MPI-1.1): MPI_{COMM,WIN,FILE}_GET_ERRHANDLER behave as if a new error handler object is created. That is, once the error handler is no l

[OMPI devel] Something broken using Persistent Requests

2006-10-12 Thread Lisandro Dalcin
I am getting errors using persistent communications (OMPI 1.1.1). I am trying to implement (in Python) example 2.32 from page 107 of MPI- The Complete Reference (V1, 2nd. edition). I think the problem is not in my wrappers (my script works fine with MPICH2). Below are the two issues: 1 - MPI_Startal

[OMPI devel] Fwd: MPI_GROUP_TRANSLATE_RANKS (again)

2006-10-19 Thread Lisandro Dalcin
I've successfully installed the just-released 1.1.2, so I'm going for a new round of catching bugs, non-standard behavior, or just what could be seen as convenient features. The problem I reported with MPI_GROUP_TRANSLATE_RANKS was corrected. However, looking at the MPI-2 errata document, it says: Add t

[OMPI devel] MPI_BUFFER_ATTACH/DETACH behaviour

2006-10-19 Thread Lisandro Dalcin
As a general idea and following similar MPI concepts, it could be really useful if MPI_BUFFER_ATTACH/DETACH allowed layered usage inside modules. That is, inside a call, a library could 'detach' the current buffer and cache it, then 'attach' an internally allocated resource, call BSEND, 'detach' its own resourc

[OMPI devel] some stuff defined for Fortran but not for C

2006-10-20 Thread Lisandro Dalcin
In release 1.1.2, the following is included in 'mpif-config.h': parameter (OMPI_MAJOR_VERSION=1) parameter (OMPI_MINOR_VERSION=1) parameter (OMPI_RELEASE_VERSION=2) Any chance of having this accessible in C? -- Lisandro Dalcín --- Centro Internacional de Métodos Comput

Re: [OMPI devel] [Open MPI] #529: MPI_START* returning OMPI_* error codes

2006-10-23 Thread Lisandro Dalcin
On 10/22/06, Open MPI wrote: #529: MPI_START* returning OMPI_* error codes -+-- Reporter: jsquyres | Owner: Type: defect| Status: new Priority: major | Milestone: Open MPI 1.1.3 Version: trun

[OMPI devel] Problems in Collectives+Intercomms

2006-11-06 Thread Lisandro Dalcin
A user testing my MPI wrappers for Python found a couple of problems with OMPI-1.1 using valgrind; here are his reports. http://projects.scipy.org/mpi4py/ticket/9 http://projects.scipy.org/mpi4py/ticket/10 I've investigated this in the OMPI-1.1.2 sources, and found the following in file ompi/mpi/c/a

[OMPI devel] failures running mpi4py testsuite, perhaps Comm.Split()

2007-07-11 Thread Lisandro Dalcin
Oops, sent to the wrong list, forwarded here... -- Forwarded message -- From: Lisandro Dalcin List-Post: devel@lists.open-mpi.org Date: Jul 11, 2007 8:58 PM Subject: failures running mpi4py testsuite, perhaps Comm.Split() To: Open MPI Hello all, after a long time I'm here

Re: [OMPI devel] failures running mpi4py testsuite, perhaps Comm.Split()

2007-07-11 Thread Lisandro Dalcin
On 7/11/07, George Bosilca wrote: The two errors you provide are quite different. The first one has been addresses few days ago in the trunk (https://svn.open-mpi.org/ trac/ompi/changeset/15291). If instead of the 1.2.3 you use anything after r15291 you will be safe in a threading case. Please

[OMPI devel] COVERITY STATIC SOURCE CODE ANALYSIS

2007-07-19 Thread Lisandro Dalcin
Have any of you ever considered asking for Open MPI to be included here, as it is an open-source project? http://scan.coverity.com/index.html From many sources (mainly related to Python), it seems the results are impressive. Regards, -- Lisandro Dalcín --- Centro Internacional de Método

[OMPI devel] MPI_APPNUM value for apps not started through mpiexec

2007-07-23 Thread Lisandro Dalcin
Using a fresh (2 hours ago) update of SVN branch v1.2, I've found that the attribute MPI_APPNUM returns -1 (minus one) when a 'sequential' application is not launched through mpiexec. Reading the MPI standard, I understand it should return a non-negative integer if defined, or it should not be defi

[OMPI devel] MPI_ALLOC_MEM warning when requesting 0 (zero) bytes

2007-07-23 Thread Lisandro Dalcin
If I understand the standard correctly, http://www.mpi-forum.org/docs/mpi-20-html/node54.htm#Node54 MPI_ALLOC_MEM with size=0 is valid ('size' is a nonnegative integer). Then, using branch v1.2, I get the following warning at runtime: malloc debug: Request for 0 bytes (base/mpool_base_alloc.c
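
A sketch of the zero-size allocation (the guarded free is my addition, not the original test):

  #include <mpi.h>
  int main(int argc, char *argv[])
  {
    void *ptr = NULL;
    MPI_Init(&argc, &argv);
    /* size = 0 is a valid, nonnegative size per the standard */
    MPI_Alloc_mem(0, MPI_INFO_NULL, &ptr);
    if (ptr != NULL)
      MPI_Free_mem(ptr);
    MPI_Finalize();
    return 0;
  }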

Re: [OMPI devel] Fwd: [Open MPI] #1101: MPI_ALLOC_MEM with 0 size must be valid

2007-07-24 Thread Lisandro Dalcin
On 7/23/07, Jeff Squyres wrote: Does anyone have any opinions on this? If not, I'll go implement option #1. Sorry, Jeff... just reading this. I think your option #1 is the better one. However, I want to warn you about two issues: * On my Linux FC6 box, malloc(0) returns different pointers for each

Re: [OMPI devel] Fwd: [Open MPI] #1101: MPI_ALLOC_MEM with 0 size must be valid

2007-07-24 Thread Lisandro Dalcin
Per Lisandro's comments: I think that if you need a random/valid value for an STL map (or similar), malloc(0) is not a good idea to use as a key. OK, regarding comments in this thread, you are completely right. I am fine with returning NULL. BTW, shouldn't this issue be commented on in the standa

Re: [OMPI devel] MPI_ALLOC_MEM warning when requesting 0 (zero) bytes

2007-07-25 Thread Lisandro Dalcin
On 7/23/07, Jeff Squyres wrote: I think that this will require a little tomfoolery to fix properly because we can't simply return NULL (you can't expect to use the pointer that we return to store anything, but you should be able to expect to be able to dereference it without seg faulting). Exc

Re: [OMPI devel] MPI_ALLOC_MEM warning when requesting 0 (zero) bytes

2007-07-26 Thread Lisandro Dalcin
On 7/25/07, Jeff Squyres wrote: Be sure to read this thread in order -- the conclusion of the thread was that we now actually *do* return NULL, per POSIX advice. OK, I got confused. And now, MPI_Free_mem is going to fail with a NULL pointer? Not sure what POSIX says, but then OMPI should also

[OMPI devel] MPI_Win_get_group

2007-07-27 Thread Lisandro Dalcin
The MPI-2 standard says (see bottom of ) MPI_WIN_GET_GROUP returns a duplicate of the group of the communicator used to create the window associated with win. The group is returned in group. Please note the 'duplicate' ... Well, it

Re: [OMPI devel] MPI_Win_get_group

2007-07-28 Thread Lisandro Dalcin
On 7/28/07, Brian Barrett wrote: > In my opinion, we conform to the standard. We reference count the > group, it's incremented on call to MPI_WIN_GROUP, and you can safely > call MPI_GROUP_FREE on the group returned from MPI_WIN_GROUP. Groups > are essentially immutable, so there's no way I can

[OMPI devel] MPI_Comm_free with MPI_COMM_SELF

2007-07-28 Thread Lisandro Dalcin
I tried to free COMM_SELF, and it seems to call the error handler attached to COMM_WORLD. Is this intended? Shouldn't OMPI use the error handler attached to COMM_SELF? As a reference, I tried this with MPICH2, and of course the call fails, but using the error handler of COMM_SELF. Again, this is a new corne

[OMPI devel] freeing GROUP_EMPTY

2007-07-28 Thread Lisandro Dalcin
A simple test trying to free GROUP_EMPTY failed with the following trace. a.out: ../opal/class/opal_object.h:403: opal_obj_run_destructors: Assertion `((void *)0) != object->obj_class' failed. [trantor:19821] *** Process received signal *** [trantor:19821] Signal: Aborted (6) [trantor:19821] Signa

Re: [OMPI devel] MPI_Win_get_group

2007-07-30 Thread Lisandro Dalcin
On 7/29/07, Jeff Squyres wrote: > On Jul 28, 2007, at 4:41 PM, Lisandro Dalcin wrote: > > > In the mean time, I would prefer to follow the standard as close as > > possible. If not, some external, stupid test suite (like the one I > > have for mip4py) would report that

[OMPI devel] looking up service

2007-07-30 Thread Lisandro Dalcin
Is MPI_Lookup_name() supposed to work on the v1.2 branch? I cannot get it working (it fails with MPI_ERR_NAME). -- Lisandro Dalcín --- Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC) Instituto de Desarrollo Tecnológico para la Industria Química (INTEC) Consejo Nacion

Re: [OMPI devel] MPI_Win_get_group

2007-07-30 Thread Lisandro Dalcin
On 7/30/07, George Bosilca wrote: > In the data-type section there is an advice to implementors that > state that a copy can simply increase the reference count if > applicable. So, we might want to apply the same logic here ... BTW, you just mentioned another obscure case. Does this apply to NAMED d

Re: [OMPI devel] MPI_Win_get_group

2007-07-31 Thread Lisandro Dalcin
On 7/31/07, Dries Kimpe wrote: > The MPI_File_get_view description in the standard has some issues related > to copies and named datatypes: > > see > http://www-unix.mcs.anl.gov/~gropp/projects/parallel/MPI/mpi-errata/discuss/fileview/fileview-1-clean.txt Indeed, your comment was exactly the sour

Re: [OMPI devel] MPI_Win_get_group

2007-07-31 Thread Lisandro Dalcin
On 7/31/07, Jeff Squyres wrote: > Just curious -- why do you need to know if a handle refers to a > predefined object? If I understand correctly, new handles should be freed in order not to leak things, to follow good programming practices, and to be completely sure a valgrind run does not repor

Re: [OMPI devel] MPI_Win_get_group

2007-08-06 Thread Lisandro Dalcin
On 8/1/07, Jeff Squyres wrote: > On Jul 31, 2007, at 6:43 PM, Lisandro Dalcin wrote: >> having to call XXX.Free() for every > > object i get from a call like XXX.Get_something() is really an > > unnecesary pain. > > Gotcha. > > But I don't see why this

Re: [OMPI devel] MPI_Win_get_group

2007-08-07 Thread Lisandro Dalcin
On 8/6/07, Jeff Squyres wrote: > On Aug 6, 2007, at 2:42 PM, Lisandro Dalcin wrote: > > Because many predefined, intrinsic objects cannot (or should not be > > able to) be freed, acording to the standard. > > I understand that. :-) But why would you call XXX.Free() on an &g

Re: [OMPI devel] MPI_Win_get_group

2007-08-07 Thread Lisandro Dalcin
On 8/1/07, Jeff Squyres wrote: > BTW, I totally forgot to mention a notable C++ MPI bindings project > that is the next-generation/successor to OMPI: the Boost C++ MPI > bindings (boost.mpi). > > http://www.generic-programming.org/~dgregor/boost.mpi/doc/ > > I believe there's also python bind

Re: [OMPI devel] [OMPI users] Possible Memcpy bug in MPI_Comm_split

2007-08-17 Thread Lisandro Dalcin
On 8/16/07, George Bosilca wrote: > Well, finally someone discovered it :) I know about this problem for > quite a while now, it pop up during our own valgrind test of the > collective module in Open MPI. However, it never create any problems > in the applications, at least not as far as I know. T

[OMPI devel] MPI_GROUP_EMPTY and MPI_Group_free()

2007-12-04 Thread Lisandro Dalcin
Dear all, As I see some activity on a related ticket, below are some comments I sent to Bill Gropp some days ago about this subject. Bill did not write me back; I know he is really busy. Group operations are supposed to return new groups, so the user has to free the result. Additionally, the standa

[OMPI devel] valgrind warnings (uninited mem passed to syscall)

2007-12-17 Thread Lisandro Dalcin
Dear all, I'm getting valgrind warnings related to syscalls with uninitialized memory (with release 1.2.4). Before providing more details and code reproducing the problem, I would like to know if there is any configure option I should take care of which enables extra memory initialization (--enab

[OMPI devel] some possible bugs after trying 1.2.6

2008-04-14 Thread Lisandro Dalcin
Hi all, I've just downloaded and installed release 1.2.6. Additionally, I'm reimplementing my Python wrappers for MPI from scratch using some more advanced tools than manual C coding. Now, I do not try to do any argument checking as I did before. Then I ran all my unittest machinery. An

[OMPI devel] Envelope of HINDEXED_BLOCK

2014-08-26 Thread Lisandro Dalcin
", ni, na, nd, combiner); MPI_Type_free(&datatype); MPI_Finalize(); return 0; } $ mpicc type_hindexed_block.c $ ./a.out ni=7 na=5 nd=1 combiner=18 -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) Numerical Por
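
A sketch of the full envelope query the output above comes from (count and displacements are my assumptions); for MPI_COMBINER_HINDEXED_BLOCK, MPI-3 specifies ni = 2, na = count, nd = 1:

  #include <stdio.h>
  #include <mpi.h>
  int main(int argc, char *argv[])
  {
    MPI_Aint displs[2] = {0, 16};
    MPI_Datatype datatype;
    int ni, na, nd, combiner;
    MPI_Init(&argc, &argv);
    MPI_Type_create_hindexed_block(2, 1, displs, MPI_INT, &datatype);
    MPI_Type_get_envelope(datatype, &ni, &na, &nd, &combiner);
    printf("ni=%d na=%d nd=%d combiner=%d\n", ni, na, nd, combiner);
    MPI_Type_free(&datatype);
    MPI_Finalize();
    return 0;
  }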

[OMPI devel] Comm_split_type(COMM_SELF, MPI_UNDEFINED, ...)

2014-08-26 Thread Lisandro Dalcin
SELF [kw2060:9865] *** MPI_ERR_ARG: invalid argument of some other kind [kw2060:9865] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, [kw2060:9865] ***and potentially your MPI job) -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Scien
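
A sketch of the call in the subject:

  #include <mpi.h>
  int main(int argc, char *argv[])
  {
    MPI_Comm newcomm;
    MPI_Init(&argc, &argv);
    /* split_type = MPI_UNDEFINED: newcomm should come back as MPI_COMM_NULL */
    MPI_Comm_split_type(MPI_COMM_SELF, MPI_UNDEFINED, 0,
                        MPI_INFO_NULL, &newcomm);
    MPI_Finalize();
    return 0;
  }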

[OMPI devel] MPI calls in callback functions during MPI_Finalize()

2014-08-26 Thread Lisandro Dalcin
ompleted successfully; not able to aggregate error messages, and not able to guarantee that all other processes were killed! -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) Numerical Porous Media Center (NumPor) King Abdu

[OMPI devel] Neighbor collectives with periodic Cartesian topologies of size one

2014-08-26 Thread Lisandro Dalcin
&argc, &argv); MPI_Cart_create(MPI_COMM_SELF, ndims, dims, periods, 0, &comm); MPI_Neighbor_allgather(&sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, comm); {int i; for (i=0;i<5;i++) printf("%d ",recvbuf[i]); printf("

[OMPI devel] malloc 0 warnings

2014-08-26 Thread Lisandro Dalcin
debug: Request for 0 bytes (osc_rdma_active_target.c, 74) -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) Numerical Porous Media Center (NumPor) King Abdullah University of Science and Technology (KAUST) http://numpor.kau

Re: [OMPI devel] Envelope of HINDEXED_BLOCK

2014-08-26 Thread Lisandro Dalcin
ould it be related to automake 13 instead of 12 ? -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) Numerical Porous Media Center (NumPor) King Abdullah University of Science and Technology (KAUST) http://numpor.kaust.edu.

Re: [OMPI devel] MPI calls in callback functions during MPI_Finalize()

2014-08-26 Thread Lisandro Dalcin
pecifically at MPI_Finalize(). Caching duplicated communicators is a key feature in many libraries. How do you propose to handle the deallocation of the duped communicators when COMM_WORLD is involved? -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Sc

Re: [OMPI devel] MPI calls in callback functions during MPI_Finalize()

2014-08-27 Thread Lisandro Dalcin
s already bad habit, > which is rightfully punished by Open MPI. > After much thinking about it, I must surrender :-), you were right. Sorry for the noise. -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) Numerical

Re: [OMPI devel] Envelope of HINDEXED_BLOCK

2014-08-27 Thread Lisandro Dalcin
e error message is from libtoolize about a file missing from the libtool > installation directory. > So, this looks (to me) like a mis-installation of libtool. > Of course, after $ sudo yum install libtool-ltdl-devel in my Fedora 20 box, everything went fine. Sorry for the noise.

Re: [OMPI devel] malloc 0 warnings

2014-08-27 Thread Lisandro Dalcin
On 27 August 2014 02:38, Jeff Squyres (jsquyres) wrote: > If you have reproducers, yes, that would be most helpful -- thanks. > OK, here you have something to start. To be fair, this is a reduction with zero count. I have many other tests for reductions with zero count that are failing. Does Ope

Re: [OMPI devel] malloc 0 warnings

2014-08-27 Thread Lisandro Dalcin
On 27 August 2014 02:38, Jeff Squyres (jsquyres) wrote: > If you have reproducers, yes, that would be most helpful -- thanks. > Here you have another one... $ cat igatherv.c #include <mpi.h> int main(int argc, char *argv[]) { signed char a=1,b=2; int rcounts[1] = {0}; int rdispls[1] = {0}; MPI_

[OMPI devel] Valgrind warning in MPI_Win_allocate[_shared]()

2014-09-28 Thread Lisandro Dalcin
goto error; } if (blocking_fence) { -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) Numerical Porous Media Center (NumPor) King Abdullah University of Science and Technology (KAUST) http://numpor.kaust.e

Re: [OMPI devel] Different behaviour with MPI_IN_PLACE in MPI_Reduce_scatter() and MPI_Ireduce_scatter()

2014-09-28 Thread Lisandro Dalcin
d: 1 $ mpicc -DNBCOLL=1 ireduce_scatter.c && mpiexec -n 1 ./a.out [0] rbuf[0]=60 expected: 1 -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) Numerical Porous Media Center (NumPor) King Abdullah University of Sci

Re: [OMPI devel] Neighbor collectives with periodic Cartesian topologies of size one

2014-09-28 Thread Lisandro Dalcin
On 25 September 2014 20:50, Nathan Hjelm wrote: > On Tue, Aug 26, 2014 at 07:03:24PM +0300, Lisandro Dalcin wrote: >> I finally managed to track down some issues in mpi4py's test suite >> using Open MPI 1.8+. The code below should be enough to reproduce the >> problem

Re: [OMPI devel] Different behaviour with MPI_IN_PLACE in MPI_Reduce_scatter() and MPI_Ireduce_scatter()

2014-12-23 Thread Lisandro Dalcin
ted: 0 $ mpiexec -n 1 ./a.out [0] rbuf[0]= 0 expected: 1 The last one is wrong. Not sure what's going on. Am I missing something? -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) Numerical Porous Media Center (NumPo

[OMPI devel] Warnings about malloc(0) in debug build

2015-05-07 Thread Lisandro Dalcin
for 0 bytes (coll_libnbc_ireduce_scatter_block.c, 67) malloc debug: Request for 0 bytes (nbc_internal.h, 505) malloc debug: Request for 0 bytes (osc_rdma_active_target.c, 74) malloc debug: Request for 0 bytes (osc_rdma_active_target.c, 76) -- Lisandro Dalcin Research Scientist Com

[OMPI devel] Issues with MPI_Type_create_f90_{real|complex}

2015-05-07 Thread Lisandro Dalcin
by 0x4008BA: main (in /home/dalcinl/Devel/BUGS-MPI/openmpi/a.out) ==1025== -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) Numerical Porous Media Center (NumPor) King Abdullah University of Science and Technolog
