On Wed, Nov 26, 2014 at 8:34 AM, Garth N. Wells <[email protected]> wrote:

>
>
> On Tue, 25 Nov, 2014 at 9:48 PM, Johan Hake <[email protected]> wrote:
>
>> Hello!
>>
>> I just pushed some fixes to the JIT interface of DOLFIN. Now one can JIT
>> on different MPI groups.
>>
>
> Nice.
>
 Previously JITing was only done on rank 1 of the mpi_comm_world. Now it
>> is done on rank 1 of any passed group communicator.
>>
>
> Do you mean rank 0?


Yes, of course.



>  There is no demo atm showing this but a test has been added:
>>
>>   test/unit/python/jit/test_jit_with_mpi_groups.py
>>
>> Here an expression, a subdomain, and a form are constructed on different
>> ranks using groups. It is somewhat tedious, as one needs to initialize PETSc
>> with the same group; otherwise PETSc will deadlock during initialization
>> (the moment a PETSc la object is constructed).
>>
>
> This is ok. It's arguably a design flaw that we don't make the user handle
> MPI initialisation manually.


Sure, it is just somewhat tedious. You cannot start your typical script
by importing dolfin.

 The procedure in Python for this is:
>>
>> 1) Construct MPI groups using mpi4py
>> 2) Initialize petsc4py using the groups
>> 3) Wrap the groups in petsc4py comms (DOLFIN only supports petsc4py, not mpi4py)
>> 4) import dolfin
>> 5) Do group specific stuff:
>>    a) Functions and forms: no change needed, as the communicator
>>       is passed via the mesh
>>    b) domain = CompiledSubDomain("...", mpi_comm=group_comm)
>>    c) e = Expression("...", mpi_comm=group_comm)
>>
>
> It's not so clear whether passing the communicator means that the
> Expression is only defined/available on group_comm, or if group_comm is
> simply to control who does the JIT. Could you clarify this?


My knowledge of MPI is not that good. I have only tried to access (and
construct) the Expression on ranks included in that group. Also, when I
tried to construct one using a group communicator on a rank that is not
included in the group, I got an error when calling MPI_size on it. There is
probably a perfectly reasonable explanation for this.
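
For what it's worth, here is a rough sketch (untested as written) of how the
steps listed above could look in a script; the even/odd split of COMM_WORLD,
the mesh size and the expression/subdomain bodies are purely illustrative:

from mpi4py import MPI

# 1) Construct MPI groups using mpi4py; here the world is simply split
#    into even and odd ranks (an illustrative choice).
world = MPI.COMM_WORLD
group_comm = world.Split(color=world.rank % 2, key=world.rank)

# 2) Initialize petsc4py on the group communicator, so PETSc does not
#    wait for ranks outside the group during initialization.
import petsc4py
petsc4py.init(comm=group_comm)

# 3) Wrap the mpi4py group communicator in a petsc4py comm, since DOLFIN
#    only accepts petsc4py communicators from Python.
from petsc4py import PETSc
petsc_group_comm = PETSc.Comm(group_comm)

# 4) Only now import dolfin.
from dolfin import UnitSquareMesh, Expression, CompiledSubDomain

# 5) Group-specific objects; the communicator decides which ranks take
#    part and which rank of the group does the JIT.
mesh = UnitSquareMesh(petsc_group_comm, 8, 8)  # functions/forms get the comm via the mesh
e = Expression("sin(x[0])", mpi_comm=petsc_group_comm)
domain = CompiledSubDomain("near(x[0], 0.0)", mpi_comm=petsc_group_comm)

On ranks outside a group one would presumably have to avoid touching these
objects at all, which matches what I saw above.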

 Please try it out and report any sharp edges. A demo would also be fun to
>> include :)
>>
>
> We could run tests on different communicators to speed them up on machines
> with high core counts!
>

True!

Johan



> Garth
>
>
>  Johan
>>
>
>
