[OMPI devel] Fwd: MPI_GROUP_TRANSLATE_RANKS (again)

2006-10-19 Thread Lisandro Dalcin

I've successfully installed the just-released 1.1.2. So I'm going for
a new round of catching bugs, non-standard behavior, and things that
could be seen as convenient features.

The problem I reported with MPI_GROUP_TRANSLATE_RANKS has been
corrected. However, the MPI-2 errata document says:

Add to page 36, after 3.2.11 (above)

3.2.12 MPI_GROUP_TRANSLATE_RANKS and MPI_PROC_NULL

MPI_PROC_NULL is a valid rank for input to MPI_GROUP_TRANSLATE_RANKS,
which returns MPI_PROC_NULL as the translated rank.

But it seems Open MPI returns MPI_UNDEFINED in this case. Try it yourself:

In [1]: from mpi4py import MPI

In [2]: group = MPI.COMM_WORLD.Get_group()

In [3]: MPI.Group.Translate_ranks(group, [MPI.PROC_NULL], group)
Out[3]: [-32766]

In [4]: MPI.UNDEFINED
Out[4]: -32766
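
Something like the following sketch recovers the errata semantics on
the caller side (the helper name is made up, not part of any API): it
filters the PROC_NULL entries out before the call and restores them
afterwards, so the implementation never sees them.

from mpi4py import MPI

def translate_ranks_errata(group1, ranks, group2):
    # Translate only the non-PROC_NULL entries.
    real = [(i, r) for i, r in enumerate(ranks) if r != MPI.PROC_NULL]
    translated = MPI.Group.Translate_ranks(
        group1, [r for _, r in real], group2)
    # Per the errata, PROC_NULL translates to PROC_NULL.
    result = [MPI.PROC_NULL] * len(ranks)
    for (i, _), t in zip(real, translated):
        result[i] = t
    return result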


Additionally, OMPI segfaults if the group is MPI_GROUP_EMPTY. Try it yourself:

In [5]: group = MPI.GROUP_EMPTY

In [6]: MPI.Group.Translate_ranks(group, [MPI.PROC_NULL], group)
Signal:11 info.si_errno:0(Success) si_code:1(SEGV_MAPERR)
Failing at addr:0xfff8
[0] func:/usr/local/openmpi/1.1.2/lib/libopal.so.0 [0xba1dfc]
[1] func:[0xe67440]
[2] func:/usr/local/openmpi/1.1.2/lib/libmpi.so.0(MPI_Group_translate_ranks+0xaa) [0x5f0786]
[3] func:/u/dalcinl/lib/python/mpi4py/_mpi.so [0xa5a6c6]
[4] func:/usr/local/lib/libpython2.4.so.1.0(PyCFunction_Call+0x66) [0x1d5d66]
# ... more traceback ...
[31] func:/usr/local/lib/libpython2.4.so.1.0 [0x20b009]
*** End of error message ***
Segmentation fault
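
Until the crash is fixed, a caller-side guard like the sketch below
(again, the helper name is made up) avoids handing an empty group to
the call; in an empty group the only valid input rank is MPI_PROC_NULL
anyway, which per the errata should translate to itself.

from mpi4py import MPI

def safe_translate_ranks(group1, ranks, group2):
    # An empty group has no rank table, so do not call into MPI at all.
    if group1.Get_size() == 0:
        return [MPI.PROC_NULL if r == MPI.PROC_NULL else MPI.UNDEFINED
                for r in ranks]
    return MPI.Group.Translate_ranks(group1, ranks, group2)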


--
Lisandro Dalcín
---
Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
PTLC - Güemes 3450, (3000) Santa Fe, Argentina
Tel/Fax: +54-(0)342-451.1594



[OMPI devel] MPI_BUFFER_ATTACH/DETACH behaviour

2006-10-19 Thread Lisandro Dalcin

As a general idea, and following similar MPI concepts, it would be
really useful if MPI_BUFFER_ATTACH/DETACH allowed layered usage inside
modules. That is, inside a call, a library could 'detach' the current
buffer and cache it, then 'attach' an internally allocated resource,
call BSEND, 'detach' its own resources, and finally re-'attach' the
original resources. I've already discussed this a bit with Bill Gropp,
regarding MPICH2 behaviour. A sketch of the pattern, under the
semantics proposed below, appears near the end of this message.

So I would like to propose the following:

1- MPI_BUFFER_ATTACH should attach the provided buffer, raising an
error if the provided size is less than BSEND_OVERHEAD (why postpone
the error until MPI_BSEND?). Currently, the behavior is:

In [1]: from mpi4py import MPI

In [2]: mem = MPI.Alloc_mem(1)

In [3]: mem
Out[3]: 

In [4]: MPI.Attach_buffer(mem)

In [5]: MPI.BSEND_OVERHEAD
Out[5]: 128

Any subsequent MPI_BSEND is likely to fail for lack of buffer space. Am I right?
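
For what it is worth, the proposed check is trivial to express; here
is a sketch as a caller-side wrapper (the name is made up, and it
assumes the object returned by MPI.Alloc_mem supports len()):

from mpi4py import MPI

def checked_attach(mem):
    # Fail here, at attach time, instead of at the first MPI_BSEND.
    if len(mem) < MPI.BSEND_OVERHEAD:
        raise ValueError("buffer of %d bytes is smaller than "
                         "BSEND_OVERHEAD (%d)"
                         % (len(mem), MPI.BSEND_OVERHEAD))
    MPI.Attach_buffer(mem)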

2- MPI_BUFFER_ATTACH should raise an error if a previous buffer was
attached. OMPI currently seems to work like this; however, in a second
call to attach I get an error code of -104, which I think is internal
and should be remapped to the public range [SUCCESS, LASTCODE). See
below: the error string is generated by MY code, because I assumed as
a general rule that calling MPI_GET_ERROR_STRING is unsafe with an
out-of-range error code.

In [6]: MPI.Attach_buffer(mem)
---
mpi4py.MPI.Exception Traceback (most recent call last)
# ... more output ...
Exception: unable to retrieve error string, ierr=-104 out of range
[MPI_SUCCESS=0, MPI_ERR_LASTCODE=54)
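
A range check of that kind might look like the sketch below (whether
MPI.Exception(ierr).Get_error_string() is the right mpi4py spelling is
an assumption on my part; in C this would be MPI_Error_string):

from mpi4py import MPI

def safe_error_string(ierr):
    # Only ask MPI for a string when the code is in the public range
    # [MPI_SUCCESS, MPI_ERR_LASTCODE); internal codes like -104 are not.
    if MPI.SUCCESS <= ierr < MPI.ERR_LASTCODE:
        return MPI.Exception(ierr).Get_error_string()
    return ("unable to retrieve error string, ierr=%d out of range "
            "[MPI_SUCCESS=%d, MPI_ERR_LASTCODE=%d)"
            % (ierr, MPI.SUCCESS, MPI.ERR_LASTCODE))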


3- MPI_BUFFER_DETACH should always succeed, even if there is no
buffer to detach. In that case, it should return a null pointer,
and perhaps a zero size.

This way, inside a library routine we can safely call
MPI_BUFFER_DETACH, then MPI_BUFFER_ATTACH/DETACH our own memory, and
finally test whether the original buffer (obtained in the initial call
to detach) is valid by testing its pointer or size.
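
Putting the three points together, a library routine could then look
like this sketch (it assumes the *proposed* semantics, where detach
never fails and returns None or a zero-size buffer when nothing is
attached, not Open MPI's current behavior; the names are made up):

from mpi4py import MPI

def library_bsend(buf, dest, tag, comm):
    # 1- Detach and cache whatever buffer the caller may have attached.
    old = MPI.Detach_buffer()
    # 2- Attach a private buffer sized for this one message.
    mem = MPI.Alloc_mem(len(buf) + MPI.BSEND_OVERHEAD)
    MPI.Attach_buffer(mem)
    try:
        comm.Bsend(buf, dest, tag)
    finally:
        # 3- Detach the private buffer (this waits for the message to
        # drain) and restore the caller's buffer, if there was one.
        MPI.Detach_buffer()
        MPI.Free_mem(mem)
        if old is not None and len(old) > 0:
            MPI.Attach_buffer(old)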

Waiting for your comments...

Regards,

--
Lisandro Dalcín
---
Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
PTLC - Güemes 3450, (3000) Santa Fe, Argentina
Tel/Fax: +54-(0)342-451.1594