Re: [OMPI devel] "Open MPI"-based MPI library used by K computer

2011-11-14 Thread Y.MATSUMOTO
Dear Open MPI community, I'm a member of the MPI library development team at Fujitsu; Takahiro Kawashima, who sent mail before, is my colleague. We are starting to send feedback. First, we fixed an MPI_LB/MPI_UB and data-packing problem. The program crashes when it meets all of the following conditions: a: The
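A minimal sketch (illustrative only; the report's exact crash conditions are in the truncated list above) of the constructs this feedback concerns: a struct datatype whose extent is set with the deprecated MPI_LB/MPI_UB markers and which is then packed with MPI_Pack. All names, displacements, and sizes below are assumptions, not taken from the report.

#include <mpi.h>

int main(int argc, char **argv)
{
    /* Struct datatype: a lower-bound marker, one int, and an upper-bound
     * marker, forcing the extent to 16 bytes. */
    int          blocklens[3] = { 1, 1, 1 };
    MPI_Aint     displs[3]    = { 0, 0, 16 };
    MPI_Datatype types[3]     = { MPI_LB, MPI_INT, MPI_UB };
    MPI_Datatype newtype;
    int          data[8]      = { 1, 2, 3, 4, 5, 6, 7, 8 };
    char         packbuf[256];
    int          pos = 0;

    MPI_Init(&argc, &argv);

    MPI_Type_struct(3, blocklens, displs, types, &newtype);
    MPI_Type_commit(&newtype);

    /* Pack two elements of the resized type into a contiguous buffer. */
    MPI_Pack(data, 2, newtype, packbuf, (int)sizeof(packbuf), &pos,
             MPI_COMM_WORLD);

    MPI_Type_free(&newtype);
    MPI_Finalize();
    return 0;
}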

[OMPI devel] Incorrect and undefined return code/function/data type at C++ header

2011-12-04 Thread Y.MATSUMOTO
Dear all, We send our next feedback. It's about the C++ header files. In ompi/mpi/cxx/*.h, some definitions of return codes, types, and functions are missing or incorrect. The attached patch fixes them (this patch is for v1.4.x). The following list shows what is missing or incorrect. *Undefined return code --

Re: [OMPI devel] Incorrect and undefined return code/function/data type at C++ header

2011-12-08 Thread Y.MATSUMOTO
't see it in the patch, either). On Dec 4, 2011, at 9:31 PM, Y.MATSUMOTO wrote: Dear all, We send our next feedback. It's about the C++ header files. In ompi/mpi/cxx/*.h, some definitions of return codes, types, and functions are missing or incorrect. The attached patch fixes them (this patch is for

Re: [OMPI devel] Incorrect and undefined return code/function/data type at C++ header

2011-12-13 Thread Y.MATSUMOTO
Dear All, I fixed the patch (MPI::Fint etc.), so please replace the patch. Best regards. --- Yuki MATSUMOTO MPI development team, Fujitsu (2011/12/09 11:35), Y.MATSUMOTO wrote: Dear Jeff and all, Thank you for your comment. I'm sorry for not replying sooner. 1:MPI::Fi

[OMPI devel] Gather(linear_sync) is truncated using derived data type

2012-01-16 Thread Y.MATSUMOTO
Dear All, Our next feedback is about an MPI_Gather problem. Gather may be truncated under the following conditions: 1: ompi_coll_tuned_gather_intra_linear_sync is called (message size is over 6000B). 2: Either the send data type or the recv data type is a derived type and the other data type is a predefined data type. Truncat
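A hedged reproducer sketch of the described condition, assuming a contiguous derived type on the send side and the predefined MPI_DOUBLE on the receive side, with an 8000-byte per-rank message (over 6000B) so that the linear_sync gather may be selected. Buffer sizes and names are illustrative, not taken from the report.

#include <mpi.h>
#include <stdlib.h>

#define NDOUBLES 1000   /* 8000 bytes per rank, over the 6000B threshold */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Derived send type: a contiguous block of NDOUBLES doubles. */
    MPI_Datatype senddt;
    MPI_Type_contiguous(NDOUBLES, MPI_DOUBLE, &senddt);
    MPI_Type_commit(&senddt);

    double *sendbuf = malloc(NDOUBLES * sizeof(double));
    double *recvbuf = NULL;
    if (rank == 0) {
        recvbuf = malloc((size_t)size * NDOUBLES * sizeof(double));
    }
    for (int i = 0; i < NDOUBLES; i++) {
        sendbuf[i] = rank + i;
    }

    /* Send side uses the derived type, receive side the predefined type. */
    MPI_Gather(sendbuf, 1, senddt,
               recvbuf, NDOUBLES, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    MPI_Type_free(&senddt);
    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}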

[OMPI devel] Violating standard in MPI_Close_port

2012-01-20 Thread Y.MATSUMOTO
Dear All, Next is a question about "MPI_Close_port". According to the MPI-2.2 standard, the "port_name" argument of MPI_Close_port() is marked as 'IN'. But in Open MPI (both trunk and 1.4.x), the content of "port_name" is updated in MPI_Close_port(). This seems to violate the MPI standard. The foll
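A small check sketch for this point: copy the port string returned by MPI_Open_port, call MPI_Close_port, and compare. Under MPI-2.2 semantics the string should be unchanged, so any difference indicates the behavior described above. The comparison logic is an assumption of how one might test it, not code from the report.

#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    char port[MPI_MAX_PORT_NAME];
    char saved[MPI_MAX_PORT_NAME];

    MPI_Init(&argc, &argv);

    MPI_Open_port(MPI_INFO_NULL, port);
    strncpy(saved, port, MPI_MAX_PORT_NAME);

    /* port_name is an 'IN' argument; its contents should not change. */
    MPI_Close_port(port);

    if (strncmp(saved, port, MPI_MAX_PORT_NAME) != 0) {
        printf("port_name was modified by MPI_Close_port\n");
    }

    MPI_Finalize();
    return 0;
}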

[OMPI devel] [PATCH] MPI_FILE_SEEK_SHARED is wrong in Fortran

2012-01-25 Thread Y.MATSUMOTO
Dear All, Next is about "MPI_FILE_SEEK_SHARED" in Fortran. When MPI_FILE_SEEK_SHARED is called in a Fortran program, the shared file pointer is not updated. The incorrect function call is in the following part: ompi/mpi/f77/file_seek_shared_f.c--- 60 void mpi_file_seek_shared_f(MPI_Fint *fh
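A C-level sketch of the check behind this report (the original problem is in the Fortran wrapper, but the expected behavior is the same): after MPI_File_seek_shared, MPI_File_get_position_shared should return the new offset. The file name and offset below are illustrative.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_File   fh;
    MPI_Offset pos;

    MPI_Init(&argc, &argv);

    MPI_File_open(MPI_COMM_WORLD, "seek_shared_test.dat",
                  MPI_MODE_CREATE | MPI_MODE_RDWR, MPI_INFO_NULL, &fh);

    /* Move the shared file pointer and read it back. */
    MPI_File_seek_shared(fh, 100, MPI_SEEK_SET);
    MPI_File_get_position_shared(fh, &pos);

    if (pos != 100) {
        printf("shared file pointer not updated (pos = %lld)\n",
               (long long)pos);
    }

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}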

[OMPI devel] [PATCH]Some typos in error code, func_name and man

2012-01-25 Thread Y.MATSUMOTO
Dear All, We found some typos in error codes/func_names/man pages. The attached three patches fix them (the patches are for v1.4.x). Best regards, Yuki MATSUMOTO MPI development team, Fujitsu Index: ompi/errhandler/errcode-internal.c === --- ompi/err

[OMPI devel] [PATCH]Segmentation Fault occurs when the function called from MPI_Comm_spawn_multiple fails

2012-02-09 Thread Y.MATSUMOTO
Dear All, Next feedback is "MPI_Comm_spawn_multiple". When the function called from MPI_Comm_spawn_multiple failed, Segmentation fault occurs. In that condition, "newcomp" sets NULL. But member of "newcomp" is referred at following part. (ompi/mpi/c/comm_spawn_multiple.c) 176 /* set array of

[OMPI devel] [PATCH]Incorrect algorithm choice using coll_tuned_dynamic_rules_filename (over 2GiB message)

2012-03-01 Thread Y.MATSUMOTO
Dear All, Next feedback is about "coll_tuned_dynamic_rules_filename". Incorrect algorithm is selected in following conditions: 1:"--mca coll_tuned_use_dynamic_rules 1" is set. 2:"--mca coll_tuned_dynamic_rules_filename" is set. 3: Collective communication which is written in 2, called >= 2GiB com

[OMPI devel] Collective communications may abend when using a buffer over 2GiB

2012-03-05 Thread Y.MATSUMOTO
Dear All, Next feedback is about "collective communications". Collective communication may be abend when it use over 2GiB buffer. This problem occurs following condition: -- communicator_size * count(scount/rcount) >= 2GiB It occurs in even small PC cluster. The following is one of the suspiciou