s not receive Java ones recently. So
users are few, if any.
We at Fujitsu don't mind dropping the Java bindings in Open MPI v5.0.x.
Thanks,
Takahiro Kawashima,
Fujitsu
> During a planning meeting for Open MPI v5.0.0 today, the question came up: is
> anyone using the Open MPI Java bindings?
>
Using AArch64 libraries and the x86_64 opal_wrapper and writing wrapper-data.txt allows
cross-compiling AArch64 MPI programs on x86_64.
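As an illustration, here is a minimal wrapper-data.txt sketch. The keys follow a typical Open MPI install; the cross-compiler name and the AArch64 install paths are assumptions for illustration, not the actual Fugaku configuration.

project=Open MPI
language=C
# assumed x86_64-hosted cross compiler targeting AArch64
compiler=aarch64-linux-gnu-gcc
preprocessor_flags=
compiler_flags=-pthread
linker_flags=
libs=-lmpi
# assumed install prefix of the AArch64-built Open MPI
includedir=/opt/ompi-aarch64/include
libdir=/opt/ompi-aarch64/lib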
Thanks,
Takahiro Kawashima,
Fujitsu
> Jeff,
>
> Cross compilation is the recommended way on Fugaku.
> In all fairness, even if Fujitsu MPI is based on Open MP
> > As other people said, Fujitsu MPI used in K is based on old
> > Open MPI (v1.6.3 with bug fixes).
>
> I guess the obvious question is will the vanilla Open-MPI work on K?
Unfortunately no. Support for Tofu and the Fujitsu resource manager
is not included in Open MPI.
Takah
used in K is based on old
Open MPI (v1.6.3 with bug fixes). We don't have a plan to
update it to a newer version because it is in a maintenance
phase regarding system software. At first glance, I also
suspect the cost of the multiple allreduces.
Takahiro Kawashima,
MPI development team,
Fujitsu
>
Paul,
Thank you.
I created an issue and PRs (v2.x and v2.0.x).
https://github.com/open-mpi/ompi/issues/4122
https://github.com/open-mpi/ompi/pull/4123
https://github.com/open-mpi/ompi/pull/4124
Takahiro Kawashima,
MPI development team,
Fujitsu
> Takahiro,
>
> This is a D
d984b4b patch? I cannot test it because I cannot update glibc.
If it is fine, I'll create a PR for the v2.x branch.
https://github.com/open-mpi/ompi/commit/d984b4b
Takahiro Kawashima,
MPI development team,
Fujitsu
> Two things to note:
>
> 1) This is *NOT* present in 3.0.0rc2, t
It might be related to https://github.com/open-mpi/ompi/issues/3697.
I added a comment to the issue.
Takahiro Kawashima,
Fujitsu
> On a PPC64LE w/ gcc-7.1.0 I see opal_fifo hang instead of failing.
>
> -Paul
>
> On Mon, Jul 3, 2017 at 4:39 PM, Paul Hargrove wrote:
>
I filed a PR against v1.10.7 though v1.10.7 may not be released.
https://github.com/open-mpi/ompi/pull/3276
I'm not aware of the v2.1.x issue, sorry. Other developers may be
able to answer.
Takahiro Kawashima,
MPI development team,
Fujitsu
> Bullseye!
>
> Thank you, Takahiro,
MPI_COMM_SPAWN,
MPI_COMM_SPAWN_MULTIPLE, MPI_COMM_ACCEPT, and MPI_COMM_CONNECT.
Takahiro Kawashima,
MPI development team,
Fujitsu
> Dear Developers,
>
> This is an old problem, which I described in an email to the users list
> in 2015, but I continue to struggle with it. In short, MPI
Hi,
I created a pull request to add the persistent collective
communication request feature to Open MPI. Though it's
incomplete and will not be merged into Open MPI soon,
you can experiment with your own collective algorithms on top of my work.
https://github.com/open-mpi/ompi/pull/2758
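For anyone who wants to try it, here is a minimal sketch of the intended usage pattern. It is written with the MPI_Allreduce_init/MPI_Start/MPI_Wait style; the exact function names and the extra info argument in the PR may differ from what is shown, so treat the signatures as an assumption.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, sum;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Set up the persistent collective once ... */
    MPI_Allreduce_init(&rank, &sum, 1, MPI_INT, MPI_SUM,
                       MPI_COMM_WORLD, MPI_INFO_NULL, &req);

    /* ... then start and complete it as many times as needed. */
    for (int i = 0; i < 3; i++) {
        MPI_Start(&req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }
    printf("rank %d: sum = %d\n", rank, sum);

    MPI_Request_free(&req);
    MPI_Finalize();
    return 0;
}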
Takahiro Kawa
Gilles, Jeff,
In the Open MPI 1.6 days, MPI_ARGVS_NULL and MPI_STATUSES_IGNORE
were defined as double precision, and the MPI_Comm_spawn_multiple
and MPI_Waitall etc. interfaces had two subroutines each.
https://github.com/open-mpi/ompi-release/blob/v1.6/ompi/include/mpif-common.h#L148
https://github
ram” for OpenMPI code base that shows
> existing classes and dependencies/associations. Are there any available tools
> to extract and visualize this information.
Thanks,
KAWASHIMA Takahiro
> I just checked MPICH 3.2, and they *do* include MPI_SIZEOF interfaces for
> CHARACTER and LOGICAL, but they are missing many of the other MPI_SIZEOF
> interfaces that we have in OMPI. Meaning: OMPI and MPICH already diverge
> wildly on MPI_SIZEOF. :-\
And OMPI 1.6 also had MPI_SIZEOF interf
Gilles,
I see. Thanks!
Takahiro Kawashima,
MPI development team,
Fujitsu
> Kawashima-san,
>
> we always duplicate the communicator, and use the CID of the duplicated
> communicator, so bottom line,
> there cannot be more than one window per communicator.
>
> i will dou
.
Regards,
Takahiro Kawashima,
MPI development team,
Fujitsu
> Hmm, I think you are correct. There may be instances where two different
> local processes may use the same CID for different communicators. It
> should be sufficient to add the PID of the current process to the
> filename to
`configure && make && make install && make check` and
running some sample MPI programs succeeded with 1.10.1rc3
on my SPARC-V9/Linux/GCC machine (Fujitsu PRIMEHPC FX10).
No @SET_MAKE@ appears in any Makefiles, of course.
> > For the first time I was also able to (attempt to) test SPARC64 via QEMU
Brice,
I'm a developer of Fujitsu MPI for K computer and Fujitsu
PRIMEHPC FX10/FX100 (SPARC-based CPU).
Though I'm not familiar with the hwloc code and didn't know
about the issue reported by Gilles, I may also be able to help
you fix it.
Takahiro Kawashima,
MPI developmen
orm MPI_Buffer_detach.
The declaration of MPI_Win_detach has not changed since
the one-sided code was merged into the trunk in commit
49d938de (svn r30816).
Regards,
Takahiro Kawashima
> iirc, the MPI_Win_detach discrepancy with the standard is intentional in
> fortran 2008,
> there is a comment i
Oh, I also noticed it yesterday and was about to report it.
And one more thing: the base parameter of MPI_Win_detach.
Regards,
Takahiro Kawashima
> Dear OpenMPI developers,
>
> I noticed a bug in the definition of the 3 MPI-3 RMA functions
> MPI_Compare_and_swap, MPI_Fetch_and_op and MPI
Hi folks,
`configure && make && make install && make test` and
running some sample MPI programs succeeded with 1.10.0rc1
on my SPARC-V9/Linux/GCC machine (Fujitsu PRIMEHPC FX10).
Takahiro Kawashima,
MPI development team,
Fujitsu
> Hi folks
>
> Now that 1.8.7 i
formation that may be useful for users and developers.
  Not so verbose. Output only on initialization, object creation, etc.
DEBUG:
  Information that is useful only for developers.
  Not so verbose. Output once per MPI routine call.
TRACE:
  Information that is useful only for developers.
  V
sufficient. But an easy implementation uses a barrier.
Thanks,
Takahiro Kawashima,
> Kawashima-san,
>
> i am confused ...
>
> as you wrote :
>
> > In the MPI_MODE_NOPRECEDE case, a barrier is not necessary
> > in the MPI implementation to end access/exposur
Hi Gilles, Nathan,
No, my conclusion is that the MPI program does not need an MPI_Barrier,
but MPI implementations need some synchronization.
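A minimal sketch of the pattern under discussion (illustrative only, not the test from the GitHub issue): the program itself places no MPI_Barrier around the fence epoch; whatever synchronization is needed to end the access/exposure epochs is the library's responsibility. Run with at least two ranks.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, buf = 0;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Win_create(&buf, sizeof(buf), sizeof(buf), MPI_INFO_NULL,
                   MPI_COMM_WORLD, &win);

    MPI_Win_fence(MPI_MODE_NOPRECEDE, win);   /* no RMA calls precede this fence */
    if (rank == 0) {
        int one = 1;
        MPI_Put(&one, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
    }
    MPI_Win_fence(MPI_MODE_NOSUCCEED, win);   /* no RMA calls follow this fence */

    if (rank == 1)
        printf("buf = %d\n", buf);            /* must see the Put after the fence */

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}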
Thanks,
Takahiro Kawashima,
> Kawashima-san,
>
> Nathan reached the same conclusion (see the github issue) and i fixed
> the test
> by ma
wait for rank 1's MPI_WIN_FENCE.)
I think this is the intent of the sentence in the MPI standard
cited above.
Thanks,
Takahiro Kawashima
> Hi Rolf,
>
> yes, same issue ...
>
> i attached a patch to the github issue ( the issue might be in the test).
>
> From th
Yes, Fujitsu MPI is running on sparcv9-compatible CPUs.
Though we currently use only the stable series (v1.6, v1.8),
they work fine.
Takahiro Kawashima,
MPI development team,
Fujitsu
> Nathan,
>
> Fujitsu MPI is openmpi based and is running on their sparcv9 like proc.
>
> Chee
Thanks!
> Takahiro,
>
> Sorry for the delay in answering. Thanks for the bug report and the patch.
> I applied your patch, and added some tougher tests to make sure we catch
> similar issues in the future.
>
> Thanks,
> George.
>
>
> On Mon, Sep 29, 2014 at 8
ather_inter and iallgather_intra.
The modification of iallgather_intra is just for symmetry with
iallgather_inter. Users guarantee the consistency of send/recv.
Both trunk and v1.8 branch have this issue.
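For context, a minimal illustration (mine, not the reproducer from this report) of the routine being patched: a nonblocking allgather on an intra-communicator, where each rank contributes one int and receives one int from every rank.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int sendval = rank * 10;
    int *recvbuf = malloc(size * sizeof(int));

    MPI_Iallgather(&sendval, 1, MPI_INT, recvbuf, 1, MPI_INT,
                   MPI_COMM_WORLD, &req);
    /* ... computation could overlap with the collective here ... */
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    if (rank == 0)
        printf("recvbuf[%d] = %d\n", size - 1, recvbuf[size - 1]);

    free(recvbuf);
    MPI_Finalize();
    return 0;
}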
Regards,
Takahiro Kawashima,
MPI development team,
Fujitsu
#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"
+ count;
+pStack[1].disp = count;
}
pStack[1].index= 0; /* useless */
Best regards,
Takahiro Kawashima,
MPI development team,
Fujitsu
/* np=2 */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
struct structure {
double not_transfered;
double transfered_1;
double transfered_2;
just FYI:
configure && make && make install && make test
succeeded on my SPARC64/Linux/GCC (both enable-debug=yes and no).
Takahiro Kawashima,
MPI development team,
Fujitsu
> Usual place:
>
> http://www.open-mpi.org/software/ompi/v1.8/
>
> Please
Hi Siegmar, Ralph,
I forgot to follow up on the previous report, sorry.
The patch I suggested is not included in Open MPI 1.8.2.
The backtrace Siegmar reported points to the problem that I fixed
in the patch.
http://www.open-mpi.org/community/lists/users/2014/08/24968.php
Siegmar:
Could you try my patc
ch to fix it in v1.8.
My fix doesn't call dss but uses memcpy. I have confirmed it on
SPARC64/Linux.
Sorry to respond so late.
Regards,
Takahiro Kawashima,
MPI development team,
Fujitsu
> Siegmar, Ralph,
>
> I'm sorry to respond so late since last week.
>
> Ralph fixed
the custom patch just now.
Wait, wait a minute, please.
Takahiro Kawashima,
MPI development team,
Fujitsu
> Hi,
>
> thank you very much to everybody who tried to solve my bus
> error problem on Solaris 10 Sparc. I thought that you found
> and fixed it, so that I installed openmpi-1.8.2r
Gilles,
I applied your patch to v1.8 and it ran successfully
on my SPARC machines.
Takahiro Kawashima,
MPI development team,
Fujitsu
> Kawashima-san and all,
>
> Here is attached a one off patch for v1.8.
> /* it does not use the __attribute__ modifier that might not be
> s
restarts;
-orte_process_name_t proc, dmn;
+orte_process_name_t proc __attribute__((__aligned__(8))), dmn;
char *hostname;
uint8_t flag;
opal_buffer_t *bptr;
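As background, my own illustration (not code from this thread) of why the aligned(8) attribute matters: SPARC raises SIGBUS when an 8-byte value is loaded through a pointer that is only 4-byte aligned, which x86 silently tolerates.

#include <stdint.h>
#include <string.h>

int main(void)
{
    char buf[16] __attribute__((__aligned__(8)));
    uint64_t v = 42;

    memcpy(buf + 4, &v, sizeof(v));       /* byte-wise copy: always safe */

    uint64_t *p = (uint64_t *)(buf + 4);  /* only 4-byte aligned */
    return (int)*p;                       /* 8-byte load: SIGBUS on SPARC */
}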
Takahiro Kawashima,
MPI development team,
Fujitsu
> Kawashima-san,
>
> This is interesting :-)
>
>
opal.local.ldr",data=(void *) 0x07fede74,type=15:'\017') at line 252
in db_hash.c
I want to dig into this issue, but unfortunately I have no time today.
My SPARC machines will stop in one hour for maintenance...
Takahiro Kawashima,
MPI development team,
Fujitsu
> I have an
MPI v1.8 branch r32447 (latest)
configure --enable-debug
SPARC-V9 (Fujitsu SPARC64 IXfx)
Linux (custom)
gcc 4.2.4
I could not reproduce it with Open MPI trunk nor with the Fujitsu compiler.
Does this information help?
Takahiro Kawashima,
MPI development team,
Fujitsu
> Hi,
>
> I'
-compiling
environment. They all passed correctly.
P.S.
I cannot reply until next week if you request something of me,
because it's COB in Japan now, sorry.
Takahiro Kawashima,
MPI development team,
Fujitsu
> In case someone else want to play with the new atomics here is the most
> up-to-
flag.
Regards,
KAWASHIMA Takahiro
> This is odd. The variable in question is registered by the MCA itself. I
> will take a look and see if I can determine why it isn't being
> deregistered correctly when the rest of the component's parameters are.
>
> -Nathan
>
> On W
with_wrapper_cxxflags=-g
with_wrapper_fflags=-g
with_wrapper_fcflags=-g
Regards,
KAWASHIMA Takahiro
> The problem is the code in question does not check the return code of
> MPI_T_cvar_handle_alloc . We are returning an error and they still try
> to use the handle (which is stale). Uncom
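A minimal sketch of the pattern being described (my example; "mpi_param_check" is just a plausible control-variable name, and this is not the code from the test in question): the handle may be used only when MPI_T_cvar_handle_alloc actually returned MPI_SUCCESS.

#include <mpi.h>
#include <stdio.h>

int main(void)
{
    int provided, index, count;
    MPI_T_cvar_handle handle;

    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);

    if (MPI_T_cvar_get_index("mpi_param_check", &index) == MPI_SUCCESS &&
        MPI_T_cvar_handle_alloc(index, NULL, &handle, &count) == MPI_SUCCESS) {
        /* the handle is valid only on this path */
        MPI_T_cvar_handle_free(&handle);
    } else {
        fprintf(stderr, "control variable unavailable; handle not used\n");
    }

    MPI_T_finalize();
    return 0;
}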
ned to kernel, and abnormal values are printed
if not yet.
So this SEGV doesn't occur if I configure Open MPI with
the --disable-dlopen option. I think that's the reason why Nathan
doesn't see this error.
Regards,
KAWASHIMA Takahiro
'B.3 Changes from Version 2.0 to Version 2.1'
(page 766) in MPI-3.0.
Though my patch is for OMPI trunk, I want to see these
corrections in the 1.8 series.
Takahiro Kawashima,
MPI development team,
Fujitsu
Index: ompi/mpi/c/mes
which are
meaningless for the write(2) system call but might cause a similar
problem.
What do you think about this patch?
Takahiro Kawashima,
MPI development team,
Fujitsu
Index: opal/mca/backtrace/backtrace.h
===
--- opal/mca/back
It is a bug in the test program, test/datatype/ddt_raw.c, and it was
fixed at r24328 in trunk.
https://svn.open-mpi.org/trac/ompi/changeset/24328
I've confirmed the failure occurs with plain v1.6.5 and it doesn't
occur with patched v1.6.5.
Thanks,
KAWASHIMA Takahiro
> Not su
Thanks!
Takahiro Kawashima,
MPI development team,
Fujitsu
> Pushed in r29187.
>
> George.
>
>
> On Sep 17, 2013, at 12:03 , "Kawashima, Takahiro"
> wrote:
>
> > George,
> >
> > Copyright-added patch is attached.
> > I don't
d the contribution agreement.
I must talk with the legal department again to sign it, sigh.
This patch is very trivial, so no issues will arise.
Thanks,
Takahiro Kawashima,
MPI development team,
Fujitsu
> Takahiro,
>
> Good catches. It's absolutely amazing that some of these errors la
ecvbuf + rdispls[i] * extent(recvtype),
recvcounts[i], recvtype, i, ...).
I attached his patch (alltoall-inplace.patch) to fix these three bugs.
Takahiro Kawashima,
MPI development team,
Fujitsu
Index: ompi/mca/coll/self/coll_self_allt
or recreating
datatype", or "received packet for Window with unknown type",
if you use MPI_UB in OSC, like the attached program osc_ub.c.
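For reference, a small example in the spirit of the attached osc_ub.c (my own sketch, not the actual reproducer): the classic MPI-1 way of attaching an MPI_UB upper-bound marker, used as the target datatype of an MPI_Put. MPI_UB is long deprecated; the point here is only to show how such a datatype is built. Run with at least two ranks.

#include <mpi.h>

int main(int argc, char **argv)
{
    int          blocklens[2] = { 1, 1 };
    MPI_Aint     disps[2]     = { 0, 2 * sizeof(int) };
    MPI_Datatype types[2]     = { MPI_INT, MPI_UB };
    MPI_Datatype padded_int;  /* one int padded to two by the MPI_UB marker */
    int          rank, winbuf[8] = { 0 }, src[4] = { 1, 2, 3, 4 };
    MPI_Win      win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Type_struct(2, blocklens, disps, types, &padded_int);
    MPI_Type_commit(&padded_int);

    MPI_Win_create(winbuf, sizeof(winbuf), sizeof(int), MPI_INFO_NULL,
                   MPI_COMM_WORLD, &win);
    MPI_Win_fence(0, win);
    if (rank == 0)  /* writes every second int of rank 1's window */
        MPI_Put(src, 4, MPI_INT, 1, 0, 4, padded_int, win);
    MPI_Win_fence(0, win);

    MPI_Win_free(&win);
    MPI_Type_free(&padded_int);
    MPI_Finalize();
    return 0;
}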
Regards,
Takahiro Kawashima,
MPI development team,
Fujitsu
Index: ompi/d
George,
Thanks. I've confirmed your patch.
I wrote a simple program to test your patch and no problems were found.
The test program is attached to this mail.
Regards,
KAWASHIMA Takahiro
> Takahiro,
>
> Please find below another patch, this time hopefully fixing all issues. The
George,
An improved patch is attached. The latter half is the same as your patch.
But again, I'm not sure this is a correct solution.
It works correctly for my attached put_dup_type_3.c.
Run as "mpiexec -n 1 ./put_dup_type_3".
It will print seven OKs if it succeeds.
Regards,
KAWASHIMA Tak
No. My patch doesn't work for a simpler case,
just a duplicate of MPI_INT.
Datatypes are too complex for me ...
Regards,
KAWASHIMA Takahiro
> George,
>
> Thanks. But no, your patch does not work correctly.
>
> The assertion failure disappeared by your patch but the v
t;total_pack_size = 0;
break;
case MPI_COMBINER_CONTIGUOUS:
This patch, in addition to your patch, works correctly for my program.
But I'm not sure this is a correct solution.
Regards,
KAWASHIMA Takahiro
> Takahiro,
>
> Nice catch. That particular code was an over-opt
types
and the calculation of total_pack_size is also involved. It doesn't seem
so simple.
Regards,
KAWASHIMA Takahiro
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
#define PRINT_ARGS
#ifdef PRINT_ARGS
/* defined in ompi/datatype/ompi_datatype_args.c */
extern int32_t ompi_datatype_print_args(const struct ompi_dat
attached. test_1 and test_2 can run
with nprocs=5, and test_3 and test_4 can run with nprocs>=3.
Though I'm not sure about the contents of the patch and the test
programs, I can ask him if you have any questions.
Regards,
Takahiro Kawashima,
MPI development team,
Fujitsu
> WHAT:
George,
Thanks. My colleague has verified your commit.
This commit will make the datatype code a bit simpler...
Regards,
Takahiro Kawashima,
MPI development team,
Fujitsu
> Takahiro,
>
> I used your second patch the one that remove the copy of the description in
> the OMPI level (r28
It doesn't copy desc, and the OMPI desc points to the OPAL desc.
I'm not sure this is a correct solution.
The attached result-after.txt is the output of the attached
show_ompi_datatype.c with my patch. I think this output is
correct.
Regards,
Takahiro Kawashima,
MPI development team,
Fujitsu
> Tak
redefined_elem_desc
array? But is having the same 'type' value in OPAL datatypes and OMPI
datatypes allowed?
Regards,
KAWASHIMA Takahiro
George,
As I wrote in the ticket a few minutes ago, your patch looks good and
it passed my test. My previous patch didn't handle generalized
requests, so your patch is better.
Thanks,
Takahiro Kawashima,
from my home
> Takahiro,
>
> I went over this ticket and attach
eature and another for bug fixes, as described in
my previous mail.
Regards,
KAWASHIMA Takahiro
> Jeff, George,
>
> I've implemented George's idea for ticket #3123 "MPI-2.2: Ordering of
> attribution deletion callbacks on MPI_COMM_SELF". See attached
> delet
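As a reminder of what ticket #3123 is about, a small illustration of the rule itself (my example, not the attached patch or test): MPI-2.2 requires that attribute delete callbacks on MPI_COMM_SELF run inside MPI_Finalize in the reverse order in which the attributes were set.

#include <mpi.h>
#include <stdio.h>

static int delete_fn(MPI_Comm comm, int keyval, void *attr, void *extra)
{
    printf("deleting attribute \"%s\"\n", (const char *) attr);
    return MPI_SUCCESS;
}

int main(int argc, char **argv)
{
    int key1, key2;

    MPI_Init(&argc, &argv);
    MPI_Comm_create_keyval(MPI_COMM_NULL_COPY_FN, delete_fn, &key1, NULL);
    MPI_Comm_create_keyval(MPI_COMM_NULL_COPY_FN, delete_fn, &key2, NULL);
    MPI_Comm_set_attr(MPI_COMM_SELF, key1, "first");
    MPI_Comm_set_attr(MPI_COMM_SELF, key2, "second");
    MPI_Finalize();   /* must print "second" then "first" */
    return 0;
}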
I don't mind the macro names. Either one is OK with me.
Thanks,
KAWASHIMA Takahiro
> Hmm, maybe something like:
>
> OPAL_LIST_FOREACH, OPAL_LISTFOREACH_REV, OPAL_LIST_FOREACH_SAFE,
> OPAL_LIST_FOREACH_REV_SAFE?
>
> -Nathan
>
> On Thu, Jan 31, 2013 at 12:36:2
Hi,
Agreed.
But how about backward traversal in addition to forward traversal?
e.g. OPAL_LIST_FOREACH_FW, OPAL_LIST_FOREACH_FW_SAFE,
OPAL_LIST_FOREACH_BW, OPAL_LIST_FOREACH_BW_SAFE
We sometimes search for an item from the end of a list.
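To make the proposal concrete, a sketch of what the forward macro might expand to (my guess, built only from the existing opal_list accessors; the _FW/_BW names are just the proposal in this thread, not an existing API):

/* forward traversal */
#define OPAL_LIST_FOREACH_FW(item, list, type)               \
    for (item = (type *) opal_list_get_first(list);          \
         item != (type *) opal_list_get_end(list);           \
         item = (type *) opal_list_get_next(item))
/* A backward variant (OPAL_LIST_FOREACH_BW) would mirror this with the
 * corresponding reverse accessors, so callers can search from the tail
 * of a list without writing the loop by hand. */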
Thanks,
KAWASHIMA Takahiro
> What: Add two new macros
Jeff,
I've filed the ticket.
https://svn.open-mpi.org/trac/ompi/ticket/3475
Thanks,
Takahiro Kawashima,
MPI development team,
Fujitsu
> Many thanks for the summary!
>
> Can you file tickets about this stuff against 1.7? Included your patches,
> etc.
>
> These are pr
cket #3123,
and the other 7 latest changesets are bug/typo fixes.
Regards,
KAWASHIMA Takahiro
> Jeff,
>
> OK. I'll try implementing George's idea and then you can compare which
> one is simpler.
>
> Regards,
> KAWASHIMA Takahiro
>
> > Not that I'
tus.c attached in
my previous mail. Run with -n 2.
http://www.open-mpi.org/community/lists/devel/2012/10/11555.php
Regards,
Takahiro Kawashima,
MPI development team,
Fujitsu
> To be honest it was hanging in one of my repos for some time. If I'm not
> mistaken it is somehow rela
Jeff, George,
Thanks for your replies. I'll let my colleagues know about these mails.
Please tell me (or write on the ticket) which repo to use for topo
after you take a look.
Regards,
Takahiro Kawashima,
MPI development team,
Fujitsu
> Long story short. It is freshly forked from the OM
I've confirmed. Thanks.
Takahiro Kawashima,
MPI development team,
Fujitsu
> Done -- thank you!
>
> On Jan 11, 2013, at 3:52 AM, "Kawashima, Takahiro"
> wrote:
>
> > Hi Open MPI core members and Rayson,
> >
> > I've confirmed to the au
/jsquyres/mpi22-c-complex
Best regards,
Takahiro Kawashima,
MPI development team,
Fujitsu
Jeff,
OK. I'll try implementing George's idea and then you can compare which
one is simpler.
Regards,
KAWASHIMA Takahiro
> Not that I'm aware of; that would be great.
>
> Unlike George, however, I'm not concerned about converting to linear
> operations for att
George,
Your idea makes sense.
Is anyone working on it? If not, I'll try.
Regards,
KAWASHIMA Takahiro
> Takahiro,
>
> Thanks for the patch. I deplore the loss of the hash table in the attribute
> management, as the potential of transforming all attribute operations to a
. If you like it, please take in
this patch.
Though I'm an employee of a company, this is my independent and private
work done at home. It contains no intellectual property from my company.
If needed, I'll sign the Individual Contributor License Agreement.
Regards,
KAWASHIMA Takahiro
delete-attr-order.patch.gz
Description: Binary data
and bibtex reference.
Best regards,
Takahiro Kawashima,
MPI development team,
Fujitsu
> Sorry for not replying sooner.
> I'm talking with the authors (they are not on this list) and
> will request linking the PDF soon if they allow it.
>
> Takahiro Kawashima,
> MPI development t
Hi,
Sorry for not replying sooner.
I'm talking with the authors (they are not on this list) and
will request linking the PDF soon if they allow it.
Takahiro Kawashima,
MPI development team,
Fujitsu
> Our policy so far was that adding a paper to the list of publication on the
> Open
George, Brian,
I also think my patch is icky. George's patch may be nicer.
Thanks,
Takahiro Kawashima,
MPI development team,
Fujitsu
> Takahiro,
>
> Nice catch. A nicer fix will be to check the type of the header, and copy the
> header accordingly. Attached is a patch fo
to segs[0].seg_addr.pval. There may be a smarter fix.
Regards,
Takahiro Kawashima,
MPI development team,
Fujitsu
Index: ompi/mca/pml/ob1/pml_ob1_recvfrag.h
===
--- ompi/mca/pml/ob1/pml_ob1_recvfrag.h (revision 27446)
+++ ompi/mca/
dd if-statements for an inactive request in order to set
a user-supplied status object to empty in ompi_request_default_wait
etc.
For least astonishment, I think A. is better.
Regards,
Takahiro Kawashima,
MPI development team,
Fujitsu
> Takahiro,
>
> I fail to see the cases you
ck in the next few hours.
> >
> > Sorry, I didn't notice the ticket 3218.
> > Now I've confirmed your commit r27403.
> > Your modification is better for my issue (3).
> >
> > With r27403, my patch for issue (1) and (2) needs modification.
> > I'll re-send modified patch in a few hours.
>
> The updated patch is attached.
> This patch addresses bugs (1) and (2) in my previous mail
> and fixes some typos in comments.
Regards,
Takahiro Kawashima,
MPI development team,
Fujitsu
27403, my patch for issue (1) and (2) needs modification.
> I'll re-send modified patch in a few hours.
The updated patch is attached.
This patch addresses bugs (1) and (2) in my previous mail
and fixes some typos in comments.
Regards,
Takahiro Kawashima,
MPI development team,
Fujitsu
Inde
. My patch will clean
> that up. I'll try to put it back in the next few hours.
Sorry, I didn't notice ticket 3218.
Now I've confirmed your commit r27403.
Your modification is better for my issue (3).
With r27403, my patch for issues (1) and (2) needs modification.
I'll re-send the modified patch in a few hours.
Regards,
Takahiro Kawashima,
MPI development team,
Fujitsu
should use the OMPI_STATUS_SET macro for all user-supplied
MPI_Status objects.
The attached patch is for Open MPI trunk and it also fixes some
typos in comments. A program to reproduce bugs (1) and (2) is
also attached.
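For reference, a minimal example of the inactive-request case (my own sketch, not the reproducer attached to this mail): MPI_Wait on a never-started persistent request must return immediately and set the user-supplied status to the empty status (source = MPI_ANY_SOURCE, tag = MPI_ANY_TAG, count 0).

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int buf = 0, count;
    MPI_Request req;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Send_init(&buf, 1, MPI_INT, 0, 0, MPI_COMM_SELF, &req);

    /* The request is inactive: it was created but never started. */
    MPI_Wait(&req, &status);
    MPI_Get_count(&status, MPI_INT, &count);
    printf("source=%d tag=%d count=%d\n",
           status.MPI_SOURCE, status.MPI_TAG, count);

    MPI_Request_free(&req);
    MPI_Finalize();
    return 0;
}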
Regards,
Takahiro Kawashima,
MPI development team,
Fujitsu
Index: ompi/request
George,
Thanks for the review and commit!
I've confirmed your modification.
Takahiro Kawashima,
MPI development team,
Fujitsu
> Takahiro,
>
> Indeed we were way too lax on canceling the requests. I modified your patch to
> correctly deal with the MEMCHECK macro (remove the cal
ect. Could anyone review it
before committing?
Regards,
Takahiro Kawashima,
MPI development team,
Fujitsu
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
/* rendezvous */
#define BUFSIZE1 (1024*1024)
/* eager */
#define BUFSIZE2 (8)
int main(int argc, char *argv[])
{
int myrank, cancelled;
void *b
.
Regards,
Takahiro Kawashima,
MPI development team,
Fujitsu
> Jeffrey Squyres wrote:
> >
> > On Apr 3, 2012, at 10:56 PM, Kawashima wrote:
> >
> > > My coworkers and I checked mpi-f90-interfaces.h against the MPI 2.2 standard
> > > and found many bugs in it.
Hi Jeff,
Jeffrey Squyres wrote:
>
> On Apr 3, 2012, at 10:56 PM, Kawashima wrote:
>
> > My coworkers and I checked mpi-f90-interfaces.h against the MPI 2.2 standard
> > and found many bugs in it. Attached patches fix them for trunk.
> > Though some of them are trivial
| inout
MPI_Mrecv | status | out | inout
I also attached a patch, mpi-f90-interfaces.all-in-one.patch, which includes
all 6 patches described above.
Regards,
Takahiro Kawashima,
MPI development team,
Fujitsu
mpi-f90-interfaces.type-mismatch.patch
Description: Binary da
_indexed_block
Regards,
Takahiro Kawashima,
MPI development team,
Fujitsu
osc-derived.trunk.patch
Description: Binary data
osc-derived.v1.4.patch
Description: Binary data
osc-hvector.c
Description: Binary data
B1_HDR_TYPE_MATCH));
> >
> >if (rc == OMPI_SUCCESS) {
> >/* NOTE this is not thread safe */
> >OPAL_THREAD_ADD32(&proc->send_sequence, 1);
> >}
Takahiro Kawashima,
MPI development team,
Fujitsu
> Does your LLP send path order MPI mat
onvertor,
we restrict the datatypes that can go into the LLP.
Of course, we cannot use the LLP for MPI_Isend.
> Note, too, that the coll modules can be laid overtop of each other -- e.g.,
> if you only implement barrier (and some others) in tofu coll, then you can
> supply NULL for the other fun
also
> impact major decisions the open-source community is taking.
The Tofu communication model is similar to that of IB RDMA.
Actually, we use the source code of the openib BTL as a reference.
We'll consider contributing some code and joining the discussion.
Regards,
Takahiro Kawashima,
MPI development team,
Fujitsu
e algorithm
implementations also bypass PML/BML/BTL to eliminate protocol and software
overhead.
To achieve the above, we created 'tofu COMMON', like sm (ompi/mca/common/sm/).
Is any of this interesting to you?
Though our BTL and COLL are quite interconnect-specific, the LLP may be
contributed in the f
Dear Open MPI community,
I'm a member of the MPI library development team at Fujitsu. Shinji
Sumimoto, whose name appears in Jeff's blog, is one of our bosses.
As Rayson and Jeff noted, the K computer, the world's most powerful HPC system
developed by RIKEN and Fujitsu, utilizes Open MPI as the base of its MPI
how about MPICH2 or other MPI implementations?
Does anyone know?
Regards,
Kawashima
nction, as suggested by Sylvain.
>
> I can't comment on that, though I doubt it's quite that simple. There's
> a big difference between MPI_THREAD_FUNNELED and MPI_THREAD_SERIALIZED
> in implementation impact.
I can't imagine a difference between those two, unless the MPI library uses
something thread-local. Ah, there may be something in OSes that I don't
know.
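For what it's worth, the user-visible side of those levels looks like this (a trivial sketch of mine): the library is free to provide a lower level than requested, and the caller has to check the provided value.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;

    /* SERIALIZED: any thread may call MPI, but only one at a time. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_SERIALIZED, &provided);
    if (provided < MPI_THREAD_SERIALIZED)
        printf("library downgraded the thread level to %d\n", provided);

    MPI_Finalize();
    return 0;
}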
Anyway, thanks for your comment!
Regards,
Kawashima
ead to performance penalty.
Regards,
Kawashima
> Hi list,
>
> I'm currently playing with thread levels in Open MPI and I'm quite
> surprised by the current code.
>
> First, the C interface :
> at ompi/mpi/c/init_thread.c:56 we have :
> #if OPAL_ENABLE_MPI_THR