Re: [OMPI devel] MPI_Get_address() with MPI_BOTTOM

2016-02-11 Thread Jeff Squyres (jsquyres)
We re-opened it, though. :) I saw Nathan fix it in the C bindings; I'm not sure if he fixed it in Fortran yet. It's noted on the pull request, though. Yes, if you'd like to file directly on GitHub, that would be great. Sent from my phone. No type good. > On Feb 11, 2016, at 2:49 PM, Lisandr…

Re: [OMPI devel] MPI_Get_address() with MPI_BOTTOM

2016-02-11 Thread Lisandro Dalcin
On 11 February 2016 at 14:41, Jeff Squyres (jsquyres) wrote: > Nope, this is not on purpose. I filed > https://github.com/open-mpi/ompi/issues/1355 to track the issue. > Oh! I was not aware you were now tracking issues on GitHub. I think you closed the issue too quickly :-) I added some addition…

Re: [OMPI devel] MTT error?

2016-02-11 Thread Kim, DongInn
That is kind of odd. I believe that I put the trailing slash “/” at the end of the redirecting URL. Anyway, I just restarted the Apache daemon and it seems to be working fine now. Maybe I did not restart the daemon after adding the trailing slash? Regards, -- - DongInn > On Feb 11, 2016, at 11…

Re: [OMPI devel] MTT error?

2016-02-11 Thread Jeff Squyres (jsquyres)
DongInn -- When you enabled the https redirects for mtt.open-mpi.org, it looks like there is a / missing in the redirect. > On Feb 11, 2016, at 11:49 AM, Howard Pritchard wrote: > > Hi Folks > > When I go to > > https://mtt.open-mpi.org/ > > and then click the summary button I get some kin…

[OMPI devel] MTT error?

2016-02-11 Thread Howard Pritchard
Hi Folks When I go to https://mtt.open-mpi.org/ and then click the summary button I get some kind of DNS lookup error. Howard

[OMPI devel] Failure calling MPI_Type_set_attr(datatype, keyval, NULL)

2016-02-11 Thread Lisandro Dalcin
Despite working for communicators and windows, setting a NULL attribute value on datatypes fails with MPI_ERR_ARG. Run the attached test case to reproduce. -- Lisandro Dalcin Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) Numerical Porous Med…
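
The attached test case is not preserved in this digest. Below is a minimal reproducer sketch (an assumption on my part, not Lisandro's original attachment), using a contiguous MPI_INT datatype and a freshly created type keyval:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int err, keyval;
        MPI_Datatype datatype;

        MPI_Init(&argc, &argv);
        /* Return error codes instead of aborting, so we can print them. */
        MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

        MPI_Type_contiguous(1, MPI_INT, &datatype);
        MPI_Type_create_keyval(MPI_TYPE_NULL_COPY_FN, MPI_TYPE_NULL_DELETE_FN,
                               &keyval, NULL);

        /* Reported bug: a NULL attribute value fails with MPI_ERR_ARG on
           datatypes, although the same pattern works for communicators
           and windows. */
        err = MPI_Type_set_attr(datatype, keyval, NULL);
        printf("MPI_Type_set_attr(..., NULL) returned %d (MPI_SUCCESS is %d)\n",
               err, MPI_SUCCESS);

        MPI_Type_free_keyval(&keyval);
        MPI_Type_free(&datatype);
        MPI_Finalize();
        return 0;
    }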

Re: [OMPI devel] MPI_Get_address() with MPI_BOTTOM

2016-02-11 Thread Jeff Squyres (jsquyres)
Nope, this is not on purpose. I filed https://github.com/open-mpi/ompi/issues/1355 to track the issue. Thanks! > On Feb 11, 2016, at 3:15 AM, Lisandro Dalcin wrote: > > After writing some tests, I discovered Open MPI's MPI_Get_address() > fails if fed with MPI_BOTTOM. Is this on purpose of j…

Re: [OMPI devel] Error using MPI_Pack_external / MPI_Unpack_external

2016-02-11 Thread Ralph Castain
I can’t speak to the packing question, but I can say that we have indeed confirmed the lack of maintenance on OMPI for Debian/Ubuntu and are working to resolve the problem. > On Feb 11, 2016, at 1:16 AM, Gilles Gouaillardet > wrote: > > Michael, > > MPI_Pack_external must convert data to big…

Re: [OMPI devel] Error using MPI_Pack_external / MPI_Unpack_external

2016-02-11 Thread Gilles Gouaillardet
Michael, MPI_Pack_external must convert data to big endian so that it can be dumped into a file and read back correctly on both big- and little-endian architectures, and with any MPI flavor. If you use only one MPI library on one architecture, or if data is never read/written from/to a file, then it is more efficient to…
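
To make the expected behavior concrete, here is a minimal sketch (mine, not a program from the thread) that packs two ints in the "external32" representation and dumps the resulting bytes; on a correct implementation the dump is big-endian on any host:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int send[2] = {1234, 5678};   /* 0x04d2, 0x162e */
        int recv[2] = {0, 0};
        char buf[64];
        MPI_Aint size, pos;

        MPI_Init(&argc, &argv);

        /* external32 is a fixed wire format: the packed size and byte
           layout must be identical on every architecture. */
        MPI_Pack_external_size("external32", 2, MPI_INT, &size);
        printf("packed size: %ld\n", (long)size);

        pos = 0;
        MPI_Pack_external("external32", send, 2, MPI_INT,
                          buf, (MPI_Aint)sizeof(buf), &pos);

        /* Expected bytes for two big-endian 4-byte ints:
           00 00 04 d2 00 00 16 2e */
        for (MPI_Aint i = 0; i < pos; i++)
            printf("%02x ", (unsigned char)buf[i]);
        printf("\n");

        pos = 0;
        MPI_Unpack_external("external32", buf, size, &pos,
                            recv, 2, MPI_INT);
        printf("unpacked: %d %d\n", recv[0], recv[1]);

        MPI_Finalize();
        return 0;
    }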

Re: [OMPI devel] Error using MPI_Pack_external / MPI_Unpack_external

2016-02-11 Thread Michael Rezny
Hi Gilles, I enhanced my simple test program to dump the contents of the buffer. If I am not mistaken, it appears that the unpack is not doing the endian conversion. Kindest regards, Mike. Good: send data 04d2 162e; MPI_Pack_external: 0; buffer size: 8; Buffer contents: d2, 04, 00, 00, 2e, 16, …
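
To read that dump (my interpretation of the truncated output): 1234 is 0x000004d2 and 5678 is 0x0000162e, so for two 4-byte integers the external32 (big-endian) buffer should start 00 00 04 d2 00 00 16 2e. The observed d2 04 00 00 2e 16 … is the host's little-endian byte order, i.e. the values went into the buffer with no conversion applied.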

Re: [OMPI devel] Error using MPI_Pack_external / MPI_Unpack_external

2016-02-11 Thread Michael Rezny
Hi Gilles, thanks for thinking about this in more detail. I understand what you are saying, but your comments raise some questions in my mind: if one is on a homogeneous cluster, is it important that, in the case of little-endian, the data be converted to external32 format (big-endian), only…

Re: [OMPI devel] Error using MPI_Pack_external / MPI_Unpack_external

2016-02-11 Thread Gilles Gouaillardet
Michael, I think it is worse than that ... without --enable-heterogeneous, it seems the data is not correctly packed (i.e. it is not converted to big endian), at least on an x86_64 arch. Unpack looks broken too, but pack followed by unpack does work. That means if you are reading data correctly wr…

[OMPI devel] MPI_Get_address() with MPI_BOTTOM

2016-02-11 Thread Lisandro Dalcin
After writing some tests, I discovered Open MPI's MPI_Get_address() fails if fed with MPI_BOTTOM. Is this on purpose or just an error-checking oversight? $ cat get_address.c #include <mpi.h> int main(int argc, char *argv[]) { MPI_Aint addr; MPI_Init(&argc, &argv); MPI_Get_address(MPI_BOTTOM, &addr…
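
The program is cut off in this digest; a plausible completion (the tail is my assumption, not Lisandro's exact code) is:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        MPI_Aint addr;
        int err;

        MPI_Init(&argc, &argv);
        /* Return error codes instead of aborting, so the failure is visible. */
        MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

        /* Per the report, Open MPI raises an error here instead of
           returning the address that MPI_BOTTOM stands for. */
        err = MPI_Get_address(MPI_BOTTOM, &addr);
        printf("err=%d addr=%ld\n", err, (long)addr);

        MPI_Finalize();
        return 0;
    }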

Re: [OMPI devel] Error using MPI_Pack_external / MPI_Unpack_external

2016-02-11 Thread Michael Rezny
Hi Ralph, you are indeed correct. However, many of our users have workstations like mine, with Open MPI provided by installing a package. So we don't know what has been configured. Then we have failures, since, for instance, Ubuntu 14.04 by default appears to have been built with heterogeneous sup…