Re: [OMPI devel] Open MPI Java MPI bindings

2022-08-10 Thread t-kawashima--- via devel
s not receive Java ones recently. So users are few, if any. We at Fujitsu don't mind dropping the Java bindings in Open MPI v5.0.x. Thanks, Takahiro Kawashima, Fujitsu > During a planning meeting for Open MPI v5.0.0 today, the question came up: is > anyone using the Open MPI Java bindings? >

Re: [OMPI devel] Script-based wrapper compilers

2022-04-04 Thread t-kawashima--- via devel
rch64 libraries and x86_64 opal_wrapper and writing wrapper-data.txt allows cross compiling AArch64 MPI programs on x86_64. Thanks, Takahiro Kawashima, Fujitsu > Jeff, > > Cross compilation is the recommended way on Fugaku. > In all fairness, even if Fujitsu MPI is based on Open MP

Re: [OMPI devel] subcommunicator OpenMPI issues on K

2017-11-07 Thread Kawashima, Takahiro
> > As other people said, Fujitsu MPI used in K is based on old > > Open MPI (v1.6.3 with bug fixes). > > I guess the obvious question is will the vanilla Open-MPI work on K? Unfortunately no. Support for Tofu and the Fujitsu resource manager is not included in Open MPI. Takah

Re: [OMPI devel] subcommunicator OpenMPI issues on K

2017-11-07 Thread Kawashima, Takahiro
used in K is based on old Open MPI (v1.6.3 with bug fixes). We don't have a plan to update it to a newer version because it is in a maintenance phase regarding system software. At first glance, I also suspect the cost of the multiple allreduce calls. Takahiro Kawashima, MPI development team, Fujitsu >

Re: [OMPI devel] [2.1.2rc2] CMA build failure on Linux/SPARC64

2017-08-21 Thread Kawashima, Takahiro
Paul, Thank you. I created an issue and PRs (v2.x and v2.0.x). https://github.com/open-mpi/ompi/issues/4122 https://github.com/open-mpi/ompi/pull/4123 https://github.com/open-mpi/ompi/pull/4124 Takahiro Kawashima, MPI development team, Fujitsu > Takahiro, > > This is a D

Re: [OMPI devel] [2.1.2rc2] CMA build failure on Linux/SPARC64

2017-08-21 Thread Kawashima, Takahiro
d984b4b patch? I cannot test it because I cannot update glibc. If it is fine, I'll create a PR for v2.x branch. https://github.com/open-mpi/ompi/commit/d984b4b Takahiro Kawashima, MPI development team, Fujitsu > Two things to note: > > 1) This is *NOT* present in 3.0.0rc2, t

Re: [OMPI devel] [3.0.0rc1] ppc64/gcc-4.8.3 check failure (regression).

2017-07-03 Thread Kawashima, Takahiro
It might be related to https://github.com/open-mpi/ompi/issues/3697 . I added a comment to the issue. Takahiro Kawashima, Fujitsu > On a PPC64LE w/ gcc-7.1.0 I see opal_fifo hang instead of failing. > > -Paul > > On Mon, Jul 3, 2017 at 4:39 PM, Paul Hargrove wrote: > > &

Re: [OMPI devel] bug in MPI_Comm_accept?

2017-04-04 Thread Kawashima, Takahiro
I filed a PR against v1.10.7, though v1.10.7 may not be released. https://github.com/open-mpi/ompi/pull/3276 I'm not aware of the v2.1.x issue, sorry. Other developers may be able to answer. Takahiro Kawashima, MPI development team, Fujitsu > Bullseye! > > Thank you, Takahiro,

Re: [OMPI devel] bug in MPI_Comm_accept?

2017-04-04 Thread Kawashima, Takahiro
MPI_COMM_SPAWN, MPI_COMM_SPAWN_MULTIPLE, MPI_COMM_ACCEPT, and MPI_COMM_CONNECT. Takahiro Kawashima, MPI development team, Fujitsu > Dear Developers, > > This is an old problem, which I described in an email to the users list > in 2015, but I continue to struggle with it. In short, MPI
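For context, a minimal sketch of the accept side of the dynamic-process pattern these functions implement (port exchange, error handling, and the connecting client are omitted):

    char port[MPI_MAX_PORT_NAME];
    MPI_Comm client;

    MPI_Open_port(MPI_INFO_NULL, port);
    /* publish or print 'port' so the client can pass it to MPI_Comm_connect */
    MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &client);
    /* ... communicate with the client over 'client' ... */
    MPI_Comm_disconnect(&client);
    MPI_Close_port(port);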

Re: [OMPI devel] MCA Component Development: Function Pointers

2017-01-18 Thread Kawashima, Takahiro
Hi, I created a pull request to add the persistent collective communication request feature to Open MPI. Though it's incomplete and will not be merged into Open MPI soon, you can try out your collective algorithms on top of my work. https://github.com/open-mpi/ompi/pull/2758 Takahiro Kawa

Re: [OMPI devel] Open MPI 2.0.0: Fortran with NAG compiler (nagfor)

2016-08-23 Thread Kawashima, Takahiro
Gilles, Jeff, In Open MPI 1.6 days, MPI_ARGVS_NULL and MPI_STATUSES_IGNORE were defined as double precision and MPI_Comm_spawn_multiple and MPI_Waitall etc. interfaces had two subroutines each. https://github.com/open-mpi/ompi-release/blob/v1.6/ompi/include/mpif-common.h#L148 https://github

Re: [OMPI devel] Class information in OpenMPI

2016-07-07 Thread KAWASHIMA Takahiro
ram” for the OpenMPI code base that shows > existing classes and dependencies/associations. Are there any available tools > to extract and visualize this information? Thanks, KAWASHIMA Takahiro

Re: [OMPI devel] Missing support for 2 types in MPI_Sizeof()

2016-04-15 Thread Kawashima, Takahiro
> I just checked MPICH 3.2, and they *do* include MPI_SIZEOF interfaces for > CHARACTER and LOGICAL, but they are missing many of the other MPI_SIZEOF > interfaces that we have in OMPI. Meaning: OMPI and MPICH already diverge > wildly on MPI_SIZEOF. :-\ And OMPI 1.6 also had MPI_SIZEOF interf

Re: [OMPI devel] Fwd: [OMPI users] shared memory under fortran, bug?

2016-02-02 Thread Kawashima, Takahiro
Gilles, I see. Thanks! Takahiro Kawashima, MPI development team, Fujitsu > Kawashima-san, > > we always duplicate the communicator, and use the CID of the duplicated > communicator, so bottom line, > there cannot be more than one window per communicator. > > i will dou

Re: [OMPI devel] Fwd: [OMPI users] shared memory under fortran, bug?

2016-02-02 Thread Kawashima, Takahiro
. Regards, Takahiro Kawashima, MPI development team, Fujitsu > Hmm, I think you are correct. There may be instances where two different > local processes may use the same CID for different communicators. It > should be sufficient to add the PID of the current process to the > filename to

Re: [OMPI devel] Please test: v1.10.1rc3

2015-10-30 Thread Kawashima, Takahiro
`configure && make && make install && make check` and running some sample MPI programs succeeded with 1.10.1rc3 on my SPARC-V9/Linux/GCC machine (Fujitsu PRIMEHPC FX10). No @SET_MAKE@ appears in any Makefiles, of course. > > For the first time I was also able to (attempt to) test SPARC64 via QEMU

Re: [OMPI devel] RFC: Remove --without-hwloc configure option

2015-09-04 Thread Kawashima, Takahiro
Brice, I'm a developer of Fujitsu MPI for the K computer and Fujitsu PRIMEHPC FX10/FX100 (SPARC-based CPUs). Though I'm not familiar with the hwloc code and didn't know the issue reported by Gilles, I would also be able to help you fix the issue. Takahiro Kawashima, MPI developmen

Re: [OMPI devel] OpenMPI 1.8 Bug Report

2015-08-27 Thread Kawashima, Takahiro
orm MPI_Buffer_detach. The declaration of MPI_Win_detach has not changed since the one-sided code was merged into the trunk at commit 49d938de (svn r30816). Regards, Takahiro Kawashima > iirc, the MPI_Win_detach discrepancy with the standard is intentional in > fortran 2008, > there is a comment i

Re: [OMPI devel] OpenMPI 1.8 Bug Report

2015-08-27 Thread Kawashima, Takahiro
Oh, I also noticed it yesterday and was about to report it. And one more, the base parameter of MPI_Win_detach. Regards, Takahiro Kawashima > Dear OpenMPI developers, > > I noticed a bug in the definition of the 3 MPI-3 RMA functions > MPI_Compare_and_swap, MPI_Fetch_and_op and MPI

Re: [OMPI devel] v1.10.0rc1 available for testing

2015-07-17 Thread Kawashima, Takahiro
Hi folks, `configure && make && make install && make test` and running some sample MPI programs succeeded with 1.10.0rc1 on my SPARC-V9/Linux/GCC machine (Fujitsu PRIMEHPC FX10). Takahiro Kawashima, MPI development team, Fujitsu > Hi folks > > Now that 1.8.7 i

Re: [OMPI devel] RFC: standardize verbosity values

2015-06-08 Thread KAWASHIMA Takahiro
formation that may be useful for users and developers. Not so verbose. Output only on initialization or object creation etc. DEBUG: Information that is useful only for developers. Not so verbose. Output once per MPI routine call. TRACE: Information that is useful only for developers. V
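A rough sketch of the proposed classification (the numeric values below are illustrative, not the RFC's final numbers):

    /* higher value = chattier output */
    enum verbosity {
        VERB_INFO  = 40, /* init / object creation; may help users too */
        VERB_DEBUG = 60, /* developer info, roughly once per MPI call  */
        VERB_TRACE = 80  /* developer info, very verbose               */
    };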

Re: [OMPI devel] c_accumulate

2015-04-20 Thread Kawashima, Takahiro
sufficient. But an easy implementation is to use a barrier. Thanks, Takahiro Kawashima, > Kawashima-san, > > i am confused ... > > as you wrote : > > > In the MPI_MODE_NOPRECEDE case, a barrier is not necessary > > in the MPI implementation to end access/exposur

Re: [OMPI devel] c_accumulate

2015-04-20 Thread Kawashima, Takahiro
Hi Gilles, Nathan, No, my conclusion is that the MPI program does not need an MPI_Barrier, but MPI implementations need some synchronization. Thanks, Takahiro Kawashima, > Kawashima-san, > > Nathan reached the same conclusion (see the github issue) and i fixed > the test > by ma
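A minimal sketch of the fence pattern under discussion, assuming win, val, and target_rank are already set up: the program relies on MPI_Win_fence alone, with no explicit MPI_Barrier, so any synchronization the assertions still require must come from the implementation:

    MPI_Win_fence(MPI_MODE_NOPRECEDE, win);  /* open the epoch */
    MPI_Accumulate(&val, 1, MPI_INT, target_rank, 0, 1, MPI_INT, MPI_SUM, win);
    MPI_Win_fence(MPI_MODE_NOSUCCEED, win);  /* close the epoch: the library,
                                                not the program, must ensure the
                                                update is complete at the target */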

Re: [OMPI devel] c_accumulate

2015-04-20 Thread Kawashima, Takahiro
wait rank 1's MPI_WIN_FENCE.) I think this is the intent of the sentence in the MPI standard cited above. Thanks, Takahiro Kawashima > Hi Rolf, > > yes, same issue ... > > i attached a patch to the github issue ( the issue might be in the test). > > From th

Re: [OMPI devel] Opal atomics question

2015-03-26 Thread Kawashima, Takahiro
Yes, Fujitsu MPI is running on a sparcv9-compatible CPU. Though we currently use only the stable series (v1.6, v1.8), they work fine. Takahiro Kawashima, MPI development team, Fujitsu > Nathan, > > Fujitsu MPI is openmpi-based and is running on their sparcv9-like proc. > > Chee

Re: [OMPI devel] Problem on MPI_Type_create_resized and multiple BTL modules

2014-11-30 Thread Kawashima, Takahiro
Thanks! > Takahiro, > > Sorry for the delay in answering. Thanks for the bug report and the patch. > I applied your patch, and added some tougher tests to make sure we catch > similar issues in the future. > > Thanks, > George. > > > On Mon, Sep 29, 2014 at 8

[OMPI devel] [patch] libnbc intercommunicator iallgather bug

2014-09-30 Thread Kawashima, Takahiro
ather_inter and iallgather_intra. The modification of iallgather_intra is just for symmetry with iallgather_inter. Users guarantee the consistency of send/recv. Both trunk and v1.8 branch have this issue. Regards, Takahiro Kawashima, MPI development team, Fujitsu #include #include #include "mpi.h"

[OMPI devel] Problem on MPI_Type_create_resized and multiple BTL modules

2014-09-29 Thread Kawashima, Takahiro
+ count; +pStack[1].disp = count; } pStack[1].index= 0; /* useless */ Best regards, Takahiro Kawashima, MPI development team, Fujitsu /* np=2 */ #include #include #include struct structure { double not_transfered; double transfered_1; double transfered_2;
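For reference, what MPI_Type_create_resized does in tests like this one (values illustrative): it keeps the layout of the old type but overrides its lower bound and extent, which changes the stride applied when count > 1:

    MPI_Datatype resized;
    /* same content as MPI_INT, but one element every 16 bytes */
    MPI_Type_create_resized(MPI_INT, /* lb */ 0, /* extent */ 16, &resized);
    MPI_Type_commit(&resized);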

Re: [OMPI devel] 1.8.3rc2 available

2014-09-26 Thread Kawashima, Takahiro
just FYI: configure && make && make install && make test succeeded on my SPARC64/Linux/GCC (both enable-debug=yes and no). Takahiro Kawashima, MPI development team, Fujitsu > Usual place: > > http://www.open-mpi.org/software/ompi/v1.8/ > > Please

Re: [OMPI devel] [OMPI users] bus error with openmpi-1.8.2 and gcc-4.9.0

2014-09-02 Thread Kawashima, Takahiro
Hi Siegmar, Ralph, I forgot to follow up on the previous report, sorry. The patch I suggested is not included in Open MPI 1.8.2. The backtrace Siegmar reported points to the problem that I fixed in the patch. http://www.open-mpi.org/community/lists/users/2014/08/24968.php Siegmar: Could you try my patc

Re: [OMPI devel] bus error with openmpi-1.8.2rc4r32485 and gcc-4.9.0

2014-08-11 Thread Kawashima, Takahiro
ch to fix it in v1.8. My fix doesn't call dss but uses memcpy. I have confirmed it on SPARC64/Linux. Sorry to respond so late. Regards, Takahiro Kawashima, MPI development team, Fujitsu > Siegmar, Ralph, > > I'm sorry to respond so late since last week. > > Ralph fixed
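The memcpy approach works because SPARC raises SIGBUS on a misaligned 64-bit load, while memcpy never issues one. A hedged illustration of the idiom (not the actual patch):

    #include <stdint.h>
    #include <string.h>

    static uint64_t load_u64(const void *p)
    {
        uint64_t v;
        memcpy(&v, p, sizeof(v)); /* safe at any alignment */
        return v;
    }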

Re: [OMPI devel] [OMPI users] bus error with openmpi-1.8.2rc4r32485 and gcc-4.9.0

2014-08-11 Thread Kawashima, Takahiro
the custom patch just now. Please wait a minute. Takahiro Kawashima, MPI development team, Fujitsu > Hi, > > thank you very much to everybody who tried to solve my bus > error problem on Solaris 10 Sparc. I thought that you found > and fixed it, so that I installed openmpi-1.8.2r

Re: [OMPI devel] [OMPI users] bus error with openmpi-1.8.2rc2 on Solaris 10 Sparc

2014-08-08 Thread Kawashima, Takahiro
Gilles, I applied your patch to v1.8 and it run successfully on my SPARC machines. Takahiro Kawashima, MPI development team, Fujitsu > Kawashima-san and all, > > Here is attached a one off patch for v1.8. > /* it does not use the __attribute__ modifier that might not be > s

Re: [OMPI devel] [OMPI users] bus error with openmpi-1.8.2rc2 on Solaris 10 Sparc

2014-08-08 Thread Kawashima, Takahiro
restarts; -orte_process_name_t proc, dmn; +orte_process_name_t proc __attribute__((__aligned__(8))), dmn; char *hostname; uint8_t flag; opal_buffer_t *bptr; Takahiro Kawashima, MPI development team, Fujitsu > Kawashima-san, > > This is interesting :-) > >

Re: [OMPI devel] [OMPI users] bus error with openmpi-1.8.2rc2 on Solaris 10 Sparc

2014-08-08 Thread Kawashima, Takahiro
opal.local.ldr",data=(void *) 0x07fede74,type=15:'\017') at line 252 in db_hash.c I want to dig this issue, but unfortunately I have no time today. My SPARC machines stop one hour later for the maintenance... Takahiro Kawashima, MPI development team, Fujitsu > I have an

Re: [OMPI devel] [OMPI users] bus error with openmpi-1.8.2rc2 on Solaris 10 Sparc

2014-08-08 Thread Kawashima, Takahiro
MPI v1.8 branch r32447 (latest) configure --enable-debug SPARC-V9 (Fujitsu SPARC64 IXfx) Linux (custom) gcc 4.2.4 I could not reproduce it with Open MPI trunk nor with Fujitsu compiler. Can this information help? Takahiro Kawashima, MPI development team, Fujitsu > Hi, > > I'

Re: [OMPI devel] RFC: add atomic compare-and-swap that returns old value

2014-08-01 Thread Kawashima, Takahiro
-compiling environment. They all passed correctly. P.S. I cannot reply until next week if you request something, because it's COB in Japan now, sorry. Takahiro Kawashima, MPI development team, Fujitsu > In case someone else wants to play with the new atomics, here is the most > up-to-

Re: [OMPI devel] MPI_T SEGV on DSO

2014-07-30 Thread KAWASHIMA Takahiro
flag. Regards, KAWASHIMA Takahiro > This is odd. The variable in question is registered by the MCA itself. I > will take a look and see if I can determine why it isn't being > deregistered correctly when the rest of the component's parameters are. > > -Nathan > > On W

Re: [OMPI devel] MPI_T SEGV on DSO

2014-07-29 Thread KAWASHIMA Takahiro
with_wrapper_cxxflags=-g with_wrapper_fflags=-g with_wrapper_fcflags=-g Regards, KAWASHIMA Takahiro > The problem is the code in question does not check the return code of > MPI_T_cvar_handle_alloc . We are returning an error and they still try > to use the handle (which is stale). Uncom

[OMPI devel] MPI_T SEGV on DSO

2014-07-29 Thread KAWASHIMA Takahiro
ned to kernel, and abnormal values are printed if not yet. So this SEGV doesn't occur if I configure Open MPI with --disable-dlopen option. I think it's the reason why Nathan doesn't see this error. Regards, KAWASHIMA Takahiro

[OMPI devel] [patch] man and FUNC_NAME corrections

2014-07-09 Thread Kawashima, Takahiro
'B.3 Changes from Version 2.0 to Version 2.1' (page 766) in MPI-3.0. Though my patch is for OMPI trunk, I want to see these corrections in the 1.8 series. Takahiro Kawashima, MPI development team, Fujitsu Index: ompi/mpi/c/mes

[OMPI devel] [patch] async-signal-safe signal handler

2013-12-11 Thread Kawashima, Takahiro
which are meaningless for the write(2) system call but might cause a similar problem. What do you think about this patch? Takahiro Kawashima, MPI development team, Fujitsu Index: opal/mca/backtrace/backtrace.h === --- opal/mca/back
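The pattern the patch aims for, as a self-contained sketch: only functions POSIX lists as async-signal-safe, such as write(2) and _exit(2), are called from the handler:

    #include <signal.h>
    #include <string.h>
    #include <unistd.h>

    static void fatal_handler(int sig)
    {
        static const char msg[] = "caught fatal signal\n";
        (void)sig;
        write(STDERR_FILENO, msg, sizeof(msg) - 1); /* async-signal-safe */
        _exit(1);                                   /* fprintf/malloc are not */
    }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = fatal_handler;
        sigaction(SIGSEGV, &sa, NULL);
        return 0;
    }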

Re: [OMPI devel] 1.6.5 large matrix test doesn't pass (decode) ?

2013-10-04 Thread KAWASHIMA Takahiro
It is a bug in the test program, test/datatype/ddt_raw.c, and it was fixed at r24328 in trunk. https://svn.open-mpi.org/trac/ompi/changeset/24328 I've confirmed the failure occurs with plain v1.6.5 and it doesn't occur with patched v1.6.5. Thanks, KAWASHIMA Takahiro > Not su

Re: [OMPI devel] [patch] MPI_IN_PLACE for MPI_ALLTOALL(V|W)

2013-09-17 Thread Kawashima, Takahiro
Thanks! Takahiro Kawashima, MPI development team, Fujitsu > Pushed in r29187. > > George. > > > On Sep 17, 2013, at 12:03 , "Kawashima, Takahiro" > wrote: > > > George, > > > > Copyright-added patch is attached. > > I don't

Re: [OMPI devel] [patch] MPI_IN_PLACE for MPI_ALLTOALL(V|W)

2013-09-17 Thread Kawashima, Takahiro
d the contribution agreement. I must talk with the legal department again to sign it, sigh... This patch is very trivial, so no issues will arise. Thanks, Takahiro Kawashima, MPI development team, Fujitsu > Takahiro, > > Good catches. It's absolutely amazing that some of these errors la

[OMPI devel] [patch] MPI_IN_PLACE for MPI_ALLTOALL(V|W)

2013-09-17 Thread Kawashima, Takahiro
ecvbuf + rdispls[i] · extent(recvtype), recvcounts[i], recvtype, i, ...). I attached his patch (alltoall-inplace.patch) to fix these three bugs. Takahiro Kawashima, MPI development team, Fujitsu Index: ompi/mca/coll/self/coll_self_allt
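For reference, the in-place form these fixes implement (MPI-2.2 semantics; recvbuf, recvcounts, rdispls, and comm assumed set up): the send arguments are ignored, and the data "sent" to rank i is taken from recvbuf + rdispls[i] * extent(recvtype), recvcounts[i] elements:

    MPI_Alltoallv(MPI_IN_PLACE, NULL, NULL, MPI_DATATYPE_NULL,
                  recvbuf, recvcounts, rdispls, MPI_INT, comm);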

[OMPI devel] [bugs] OSC-related datatype bugs

2013-09-04 Thread Kawashima, Takahiro
or recreating datatype", or "received packet for Window with unknown type", if you use MPI_UB in OSC, like the attached program osc_ub.c. Regards, Takahiro Kawashima, MPI development team, Fujitsu Index: ompi/d

Re: [OMPI devel] [bug] One-sided communication with a duplicated datatype

2013-07-15 Thread KAWASHIMA Takahiro
George, Thanks. I've confirmed your patch. I wrote a simple program to test your patch and no problems were found. The test program is attached to this mail. Regards, KAWASHIMA Takahiro > Takahiro, > > Please find below another patch, this time hopefully fixing all issues. The

Re: [OMPI devel] [bug] One-sided communication with a duplicated datatype

2013-07-14 Thread KAWASHIMA Takahiro
George, An improved patch is attached. The latter half is the same as your patch. But again, I'm not sure this is a correct solution. It works correctly for my attached put_dup_type_3.c. Run as "mpiexec -n 1 ./put_dup_type_3". It will print seven OKs if it succeeds. Regards, KAWASHIMA Tak

Re: [OMPI devel] [bug] One-sided communication with a duplicated datatype

2013-07-14 Thread KAWASHIMA Takahiro
No. My patch doesn't work for a simpler case, just a duplicate of MPI_INT. Datatype is too complex for me ... Regards, KAWASHIMA Takahiro > George, > > Thanks. But no, your patch does not work correctly. > > The assertion failure disappeared with your patch but the v

Re: [OMPI devel] [bug] One-sided communication with a duplicated datatype

2013-07-14 Thread KAWASHIMA Takahiro
>total_pack_size = 0; break; case MPI_COMBINER_CONTIGUOUS: This patch, in addition to your patch, works correctly for my program. But I'm not sure this is a correct solution. Regards, KAWASHIMA Takahiro > Takahiro, > > Nice catch. That particular code was an over-opt

[OMPI devel] [bug] One-sided communication with a duplicated datatype

2013-07-14 Thread KAWASHIMA Takahiro
types and the calculation of total_pack_size is also involved. It seems not so simple. Regards, KAWASHIMA Takahiro #include #include #include #define PRINT_ARGS #ifdef PRINT_ARGS /* defined in ompi/datatype/ompi_datatype_args.c */ extern int32_t ompi_datatype_print_args(const struct ompi_dat
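The failing pattern distilled (a sketch with buf and win assumed set up; the full reproducer is attached to the original mail): one-sided communication with a datatype produced by MPI_Type_dup:

    MPI_Datatype dup;
    MPI_Type_dup(MPI_INT, &dup);
    MPI_Win_fence(0, win);
    MPI_Put(buf, 1, dup, /* target */ 0, /* disp */ 0, 1, dup, win);
    MPI_Win_fence(0, win);
    MPI_Type_free(&dup);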

Re: [OMPI devel] RFC MPI 2.2 Dist_graph addition

2013-07-01 Thread Kawashima, Takahiro
attached. test_1 and test_2 can run with nprocs=5, and test_3 and test_4 can run with nprocs>=3. Though I'm not sure about the contents of the patch and the test programs, I can ask him if you have any questions. Regards, Takahiro Kawashima, MPI development team, Fujitsu > WHAT:

Re: [OMPI devel] Datatype initialization bug?

2013-05-23 Thread Kawashima, Takahiro
George, Thanks. My colleague has verified your commit. This commit will make the datatype code a bit simpler... Regards, Takahiro Kawashima, MPI development team, Fujitsu > Takahiro, > > I used your second patch, the one that removes the copy of the description in > the OMPI level (r28

Re: [OMPI devel] Datatype initialization bug?

2013-05-22 Thread Kawashima, Takahiro
It doesn't copy desc, and the OMPI desc points to the OPAL desc. I'm not sure this is a correct solution. The attached result-after.txt is the output of the attached show_ompi_datatype.c with my patch. I think this output is correct. Regards, Takahiro Kawashima, MPI development team, Fujitsu > Tak

[OMPI devel] Datatype initialization bug?

2013-05-16 Thread KAWASHIMA Takahiro
redefined_elem_desc array? But is having the same 'type' value in OPAL datatypes and OMPI datatypes allowed? Regards, KAWASHIMA Takahiro

Re: [OMPI devel] [OMPI svn-full] svn:open-mpi r27880 - trunk/ompi/request

2013-05-01 Thread KAWASHIMA Takahiro
George, As I wrote in the ticket a few minutes ago, your patch looks good and it passed my test. My previous patch didn't care about generalized requests so your patch is better. Thanks, Takahiro Kawashima, from my home > Takahiro, > > I went over this ticket and attach

Re: [OMPI devel] [patch] MPI-2.2: Ordering of attribution deletion callbacks on MPI_COMM_SELF

2013-04-29 Thread KAWASHIMA Takahiro
eature and another for bug fixes, as described in my previous mail. Regards, KAWASHIMA Takahiro > Jeff, George, > > I've implemented George's idea for ticket #3123 "MPI-2.2: Ordering of > attribution deletion callbacks on MPI_COMM_SELF". See attached > delet

Re: [OMPI devel] RFC: opal_list iteration macros

2013-01-30 Thread KAWASHIMA Takahiro
I don't care about the macro names. Either one is OK for me. Thanks, KAWASHIMA Takahiro > Hmm, maybe something like: > > OPAL_LIST_FOREACH, OPAL_LIST_FOREACH_REV, OPAL_LIST_FOREACH_SAFE, > OPAL_LIST_FOREACH_REV_SAFE? > > -Nathan > > On Thu, Jan 31, 2013 at 12:36:2

Re: [OMPI devel] RFC: opal_list iteration macros

2013-01-30 Thread KAWASHIMA Takahiro
Hi, Agreed. But how about backward traversal in addition to forward traversal? e.g. OPAL_LIST_FOREACH_FW, OPAL_LIST_FOREACH_FW_SAFE, OPAL_LIST_FOREACH_BW, OPAL_LIST_FOREACH_BW_SAFE We sometimes search for an item from the end of a list. Thanks, KAWASHIMA Takahiro > What: Add two new macros
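Illustrative only, not the OPAL implementation: the general shape of such macros on a simple linked list whose items have a 'next' field. The _SAFE variant caches the next pointer so the current item may be removed during traversal:

    #define LIST_FOREACH(item, head) \
        for ((item) = (head); (item) != NULL; (item) = (item)->next)

    #define LIST_FOREACH_SAFE(item, tmp, head)                      \
        for ((item) = (head), (tmp) = (item) ? (item)->next : NULL; \
             (item) != NULL;                                        \
             (item) = (tmp), (tmp) = (item) ? (item)->next : NULL)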

Re: [OMPI devel] [OMPI svn-full] svn:open-mpi r27880 - trunk/ompi/request

2013-01-24 Thread Kawashima, Takahiro
Jeff, I've filed the ticket. https://svn.open-mpi.org/trac/ompi/ticket/3475 Thanks, Takahiro Kawashima, MPI development team, Fujitsu > Many thanks for the summary! > > Can you file tickets about this stuff against 1.7? Included your patches, > etc. > > These are pr

Re: [OMPI devel] [patch] MPI-2.2: Ordering of attribution deletion callbacks on MPI_COMM_SELF

2013-01-24 Thread KAWASHIMA Takahiro
cket #3123, and the other 7 latest changesets are bug/typo fixes. Regards, KAWASHIMA Takahiro > Jeff, > > OK. I'll try implementing George's idea and then you can compare which > one is simpler. > > Regards, > KAWASHIMA Takahiro > > > Not that I'

Re: [OMPI devel] [OMPI svn-full] svn:open-mpi r27880 - trunk/ompi/request

2013-01-22 Thread Kawashima, Takahiro
tus.c attached in my previous mail. Run with -n 2. http://www.open-mpi.org/community/lists/devel/2012/10/11555.php Regards, Takahiro Kawashima, MPI development team, Fujitsu > To be honest it was hanging in one of my repos for some time. If I'm not > mistaken it is somehow rela

Re: [OMPI devel] MPI-2.2 status #2223, #3127

2013-01-20 Thread Kawashima, Takahiro
Jeff, George, Thanks for your replies. I'll notify my colleagues of these mails. Please tell me (or write on the ticket) which repo to use for topo after you take a look. Regards, Takahiro Kawashima, MPI development team, Fujitsu > Long story short. It is freshly forked from the OM

Re: [OMPI devel] "Open MPI"-based MPI library used by K computer

2013-01-20 Thread Kawashima, Takahiro
I've confirmed. Thanks. Takahiro Kawashima, MPI development team, Fujitsu > Done -- thank you! > > On Jan 11, 2013, at 3:52 AM, "Kawashima, Takahiro" > wrote: > > > Hi Open MPI core members and Rayson, > > > > I've confirmed to the au

[OMPI devel] MPI-2.2 status #2223, #3127

2013-01-17 Thread Kawashima, Takahiro
/jsquyres/mpi22-c-complex Best regards, Takahiro Kawashima, MPI development team, Fujitsu

Re: [OMPI devel] [patch] MPI-2.2: Ordering of attribution deletion callbacks on MPI_COMM_SELF

2013-01-17 Thread KAWASHIMA Takahiro
Jeff, OK. I'll try implementing George's idea and then you can compare which one is simpler. Regards, KAWASHIMA Takahiro > Not that I'm aware of; that would be great. > > Unlike George, however, I'm not concerned about converting to linear > operations for att

Re: [OMPI devel] [patch] MPI-2.2: Ordering of attribution deletion callbacks on MPI_COMM_SELF

2013-01-17 Thread KAWASHIMA Takahiro
George, Your idea makes sense. Is anyone working on it? If not, I'll try. Regards, KAWASHIMA Takahiro > Takahiro, > > Thanks for the patch. I deplore the loss of the hash table in the attribute > management, as the potential of transforming all attribute operations to a

[OMPI devel] [patch] MPI-2.2: Ordering of attribution deletion callbacks on MPI_COMM_SELF

2013-01-16 Thread KAWASHIMA Takahiro
. If you like it, please take this patch in. Though I'm an employee of a company, this is my independent, private work done at home. No intellectual property from my company. If needed, I'll sign the Individual Contributor License Agreement. Regards, KAWASHIMA Takahiro delete-attr-order.patch.gz Description: Binary data
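What the MPI-2.2 rule enables, sketched as a complete program (callback and keyval names are illustrative): delete callbacks on MPI_COMM_SELF run at the start of MPI_Finalize, in reverse order of attribute setting, so they can serve as finalize hooks:

    #include <mpi.h>

    static int on_finalize(MPI_Comm comm, int keyval, void *attr, void *extra)
    {
        /* runs early in MPI_Finalize; with several attributes set,
           callbacks run in LIFO order of their setting */
        return MPI_SUCCESS;
    }

    int main(int argc, char *argv[])
    {
        int keyval;
        MPI_Init(&argc, &argv);
        MPI_Comm_create_keyval(MPI_COMM_NULL_COPY_FN, on_finalize, &keyval, NULL);
        MPI_Comm_set_attr(MPI_COMM_SELF, keyval, NULL);
        MPI_Finalize(); /* invokes on_finalize first */
        return 0;
    }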

Re: [OMPI devel] "Open MPI"-based MPI library used by K computer

2013-01-11 Thread Kawashima, Takahiro
and bibtex reference. Best regards, Takahiro Kawashima, MPI development team, Fujitsu > Sorry for not replying sooner. > I'm talking with the authors (they are not on this list) and > will request linking the PDF soon if they allow. > > Takahiro Kawashima, > MPI development t

Re: [OMPI devel] "Open MPI"-based MPI library used by K computer

2013-01-10 Thread Kawashima, Takahiro
Hi, Sorry for not replying sooner. I'm talking with the authors (they are not on this list) and will request linking the PDF soon if they allow. Takahiro Kawashima, MPI development team, Fujitsu > Our policy so far was that adding a paper to the list of publications on the > Open

Re: [OMPI devel] [patch] SEGV on processing unexpected messages

2012-10-18 Thread Kawashima, Takahiro
George, Brian, I also think my patch is icky. George's patch may be nicer. Thanks, Takahiro Kawashima, MPI development team, Fujitsu > Takahiro, > > Nice catch. A nicer fix will be to check the type of the header, and copy the > header accordingly. Attached is a patch fo

[OMPI devel] [patch] SEGV on processing unexpected messages

2012-10-17 Thread Kawashima, Takahiro
to segs[0].seg_addr.pval. There may exist a smarter fix. Regards, Takahiro Kawashima, MPI development team, Fujitsu Index: ompi/mca/pml/ob1/pml_ob1_recvfrag.h === --- ompi/mca/pml/ob1/pml_ob1_recvfrag.h (revision 27446) +++ ompi/mca/

Re: [OMPI devel] [patch] Invalid MPI_Status for null or inactive request

2012-10-15 Thread Kawashima, Takahiro
dd if-statements for an inactive request in order to set a user-supplied status object to empty in ompi_request_default_wait etc. For least astonishment, I think A. is better. Regards, Takahiro Kawashima, MPI development team, Fujitsu > Takahiro, > > I fail to see the cases you

Re: [OMPI devel] [patch] Invalid MPI_Status for null or inactive request

2012-10-15 Thread Kawashima, Takahiro
ck in the next few hours. > > > > Sorry, I didn't notice ticket 3218. > > Now I've confirmed your commit r27403. > > Your modification is better for my issue (3). > > > > With r27403, my patch for issues (1) and (2) needs modification. > > I'll re-send the modified patch in a few hours. > > The updated patch is attached. > This patch addresses bugs (1) and (2) in my previous mail > and fixes some typos in comments. Regards, Takahiro Kawashima, MPI development team, Fujitsu

Re: [OMPI devel] [patch] Invalid MPI_Status for null or inactive request

2012-10-04 Thread Kawashima, Takahiro
27403, my patch for issues (1) and (2) needs modification. > I'll re-send the modified patch in a few hours. The updated patch is attached. This patch addresses bugs (1) and (2) in my previous mail and fixes some typos in comments. Regards, Takahiro Kawashima, MPI development team, Fujitsu Inde

Re: [OMPI devel] [patch] Invalid MPI_Status for null or inactive request

2012-10-04 Thread Kawashima, Takahiro
. My patch will clean > that up. I'll try to put it back in the next few hours. Sorry, I didn't notice ticket 3218. Now I've confirmed your commit r27403. Your modification is better for my issue (3). With r27403, my patch for issues (1) and (2) needs modification. I'll re-send the modified patch in a few hours. Regards, Takahiro Kawashima, MPI development team, Fujitsu

[OMPI devel] [patch] Invalid MPI_Status for null or inactive request

2012-10-04 Thread Kawashima, Takahiro
should use the OMPI_STATUS_SET macro for all user-supplied MPI_Status objects. The attached patch is for Open MPI trunk and it also fixes some typos in comments. A program to reproduce bugs (1) and (2) is also attached. Regards, Takahiro Kawashima, MPI development team, Fujitsu Index: ompi/request
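The behavior at stake, in miniature: waiting on a null (or inactive) request must set a user-supplied status to "empty":

    MPI_Request req = MPI_REQUEST_NULL;
    MPI_Status st;
    int count;

    MPI_Wait(&req, &st);
    /* expected: st.MPI_SOURCE == MPI_ANY_SOURCE, st.MPI_TAG == MPI_ANY_TAG,
       st.MPI_ERROR == MPI_SUCCESS, and a count of zero */
    MPI_Get_count(&st, MPI_INT, &count);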

Re: [OMPI devel] [patch] MPI_Cancel should not cancel a request if it has a matched recv frag

2012-07-26 Thread Kawashima, Takahiro
George, Thanks for the review and commit! I've confirmed your modification. Takahiro Kawashima, MPI development team, Fujitsu > Takahiro, > > Indeed we were way too lax on canceling the requests. I modified your patch to > correctly deal with the MEMCHECK macro (remove the cal

[OMPI devel] [patch] MPI_Cancel should not cancel a request if it has a matched recv frag

2012-07-26 Thread Kawashima, Takahiro
ect. Could anyone review it before committing? Regards, Takahiro Kawashima, MPI development team, Fujitsu #include #include #include /* rendezvous */ #define BUFSIZE1 (1024*1024) /* eager */ #define BUFSIZE2 (8) int main(int argc, char *argv[]) { int myrank, cancelled; void *b
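The semantics at issue, sketched (buf and len assumed set up): once the receive has matched an incoming fragment, the cancel must fail and the data must be delivered:

    MPI_Request req;
    MPI_Status st;
    int cancelled;

    MPI_Irecv(buf, len, MPI_BYTE, 0, 0, MPI_COMM_WORLD, &req);
    MPI_Cancel(&req);
    MPI_Wait(&req, &st);
    MPI_Test_cancelled(&st, &cancelled);
    /* if a matching send had already arrived, cancelled must be 0 */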

Re: [OMPI devel] [patch] Bugs in mpi-f90-interfaces.h and its bridge implementation

2012-04-06 Thread Kawashima
. Regards, Takahiro Kawashima, MPI development team, Fujitsu > Jeffrey Squyres wrote: > > > > On Apr 3, 2012, at 10:56 PM, Kawashima wrote: > > > > > My coworkers and I checked mpi-f90-interfaces.h against the MPI 2.2 standard > > > and found many bugs in it.

Re: [OMPI devel] [patch] Bugs in mpi-f90-interfaces.h and its bridge implementation

2012-04-04 Thread Kawashima
Hi Jeff, Jeffrey Squyres wrote: > > On Apr 3, 2012, at 10:56 PM, Kawashima wrote: > > > My coworkers and I checked mpi-f90-interfaces.h against the MPI 2.2 standard > > and found many bugs in it. Attached patches fix them for trunk. > > Though some of them are trivial

[OMPI devel] [patch] Bugs in mpi-f90-interfaces.h and its bridge implementation

2012-04-03 Thread Kawashima
| inout MPI_Mrecv| status| out | inout I also attached a patch mpi-f90-interfaces.all-in-one.patch that includes all 6 patches described above. Regards, Takahiro Kawashima, MPI development team, Fujitsu mpi-f90-interfaces.type-mismatch.patch Description: Binary da

[OMPI devel] [patch] One-sided communication with derived datatype fails on sparc64

2012-01-12 Thread Kawashima
_indexed_block Regards, Takahiro Kawashima, MPI development team, Fujitsu osc-derived.trunk.patch Description: Binary data osc-derived.v1.4.patch Description: Binary data osc-hvector.c Description: Binary data

Re: [OMPI devel] "Open MPI"-based MPI library used by K computer

2011-07-03 Thread Kawashima
B1_HDR_TYPE_MATCH)); > > > >if (rc == OMPI_SUCCESS) { > >/* NOTE this is not thread safe */ > >OPAL_THREAD_ADD32(&proc->send_sequence, 1); > >} Takahiro Kawashima, MPI development team, Fujitsu > Does your llp send path order MPI mat

Re: [OMPI devel] "Open MPI"-based MPI library used by K computer

2011-06-29 Thread Kawashima
onvertor, we restrict the datatypes that can go into the LLP. Of course, we cannot use the LLP on MPI_Isend. > Note, too, that the coll modules can be laid overtop of each other -- e.g., > if you only implement barrier (and some others) in tofu coll, then you can > supply NULL for the other fun
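The layering described in the quote, sketched against a simplified module struct (the field subset and the Tofu function name are illustrative; the real mca_coll_base_module_t has many more entries): entries left NULL fall through to lower-priority coll components:

    static mca_coll_base_module_t tofu_module = {
        .coll_barrier   = mca_coll_tofu_barrier, /* hypothetical Tofu barrier */
        .coll_bcast     = NULL,                  /* fall through to tuned/basic */
        .coll_allreduce = NULL
    };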

Re: [OMPI devel] "Open MPI"-based MPI library used by K computer

2011-06-29 Thread Kawashima
also > impact major decisions the open-source community is taking. The Tofu communication model is similar to that of IB RDMA. Actually, we used the source code of the openib BTL as a reference. We'll consider contributing some code and joining the discussion. Regards, Takahiro Kawashima, MPI development team, Fujitsu

Re: [OMPI devel] "Open MPI"-based MPI library used by K computer

2011-06-29 Thread Kawashima
e algorithm implementations also bypass PML/BML/BTL to eliminate protocol and software overhead. To achieve the above, we created 'tofu COMMON', like sm (ompi/mca/common/sm/). Is any of this interesting? Though our BTL and COLL are quite interconnect-specific, LLP may be contributed in the f

Re: [OMPI devel] "Open MPI"-based MPI library used by K computer

2011-06-27 Thread Takahiro Kawashima
Dear Open MPI community, I'm a member of the MPI library development team at Fujitsu. Shinji Sumimoto, whose name appears in Jeff's blog, is one of our bosses. As Rayson and Jeff noted, the K computer, the world's most powerful HPC system developed by RIKEN and Fujitsu, utilizes Open MPI as a base of its MPI

Re: [OMPI devel] Thread safety levels

2010-05-10 Thread Kawashima
how about MPICH2 or other MPI implementations? Does anyone know? Regards, Kawashima

Re: [OMPI devel] Thread safety levels

2010-05-10 Thread Kawashima
nction, as suggested by Sylvain. > > I can't comment on that, though I doubt it's quite that simple. There's > a big difference between MPI_THREAD_FUNNELED and MPI_THREAD_SERIALIZED > in implementation impact. I can't imagine a difference between those two, unless the MPI library uses something thread-local. Ah, there may be something on OSes that I don't know. Anyway, thanks for your comment! Regards, Kawashima

Re: [OMPI devel] Thread safety levels

2010-05-10 Thread Kawashima
ead to a performance penalty. Regards, Kawashima > Hi list, > > I'm currently playing with thread levels in Open MPI and I'm quite > surprised by the current code. > > First, the C interface: > at ompi/mpi/c/init_thread.c:56 we have: > #if OPAL_ENABLE_MPI_THR
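For reference, how an application negotiates these levels (a minimal complete sketch):

    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int provided;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_SERIALIZED, &provided);
        if (provided < MPI_THREAD_SERIALIZED) {
            /* e.g. MPI_THREAD_FUNNELED: only the thread that called
               MPI_Init_thread may make MPI calls */
        }
        MPI_Finalize();
        return 0;
    }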