On Wed, Nov 26, 2014 at 10:13 AM, Martin Sandve Alnæs <[email protected]>
wrote:

> Chris: when jit is called on a subset of processors, the others obviously
> cannot participate. The check for rank 0 within the group is exactly to
> stop all the other participating processors from writing to disk
> simultaneously. So if the Instant cache is in a shared directory, there's
> no additional disk usage here.
>
> Of course if you have two groups doing exactly the same operations, then
> rank 0 from each group will race when copying into the Instant cache. In
> that case we don't have MPI communication between the racing processors
> because they are in different groups, and we'll need the file locking in
> Instant to work. Which I guess is still a problem with NFS...
>

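To make the pattern Martin describes concrete, here is a minimal sketch of
JITing only on rank 0 of each group communicator. It uses mpi4py rather than
the DOLFIN internals, and the cache path and "compile" step are placeholders,
not the actual DOLFIN/Instant code:

import os
from mpi4py import MPI

comm = MPI.COMM_WORLD

# Split COMM_WORLD into two groups, e.g. by even/odd world rank.
color = comm.rank % 2
group_comm = comm.Split(color=color, key=comm.rank)

cache_dir = os.path.expanduser("~/.instant_cache_sketch")  # placeholder path

if group_comm.rank == 0:
    # Stand-in for the JIT compile + copy-into-cache step; this is where
    # rank 0 of two different groups could race on a shared filesystem.
    os.makedirs(cache_dir, exist_ok=True)
    with open(os.path.join(cache_dir, "module_%d.txt" % color), "w") as f:
        f.write("pretend this is a compiled module\n")

# The remaining ranks in the group just wait for rank 0 and then load
# the cached module.
group_comm.Barrier()
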
We do have support for NFS-safe file locking using flufl.lock, but as far
as I remember this is only a solution for some clusters. If I recall
correctly, it uses some particular file flags, and administrators can turn
these off for some file systems.
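
For reference, a minimal sketch of what taking the lock with flufl.lock could
look like; the lock file path and lifetime are made up for illustration:

from datetime import timedelta
from flufl.lock import Lock

# Hypothetical lock file living next to the shared Instant cache.
lockfile = "/shared/instant-cache/compile.lck"

# flufl.lock is designed to be NFS-safe: it builds the lock out of hard
# links rather than relying on flock()/fcntl() support in the filesystem.
lock = Lock(lockfile, lifetime=timedelta(seconds=60))
lock.lock()   # blocks until the lock is acquired
try:
    pass      # copy the freshly compiled module into the shared cache here
finally:
    lock.unlock()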

Johan



> Martin
> On 26 Nov 2014 09:57, "Johan Hake" <[email protected]> wrote:
>
>>
>>
>> On Wed, Nov 26, 2014 at 9:55 AM, Chris Richardson <[email protected]>
>> wrote:
>>
>>> On 25/11/2014 21:48, Johan Hake wrote:
>>>
>>>> Hello!
>>>>
>>>> I just pushed some fixes to the JIT interface of DOLFIN. Now one can
>>>> JIT on different MPI groups. Previously JITing was only done on rank 0
>>>> of mpi_comm_world. Now it is done on rank 0 of any passed group
>>>> communicator.
>>>>
>>>
>>> Maybe I'm missing something here, but could you explain what the
>>> advantage of doing it this way is?
>>> If you have several processes JITing the same code, where do they save
>>> it in the filesystem?
>>> If they all share the same filesystem, surely this will take up more
>>> resources, as more processes try to write at the same time?
>>>
>>
>> That is just an unintended (necessary?) side effect.
>>
>>> Or is it intended for cases where the Expression is different on
>>> different MPI groups?
>>
>>
>> Yes.
>>
>> Johan
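
To illustrate the use case confirmed here, a small sketch (again mpi4py-based;
the DOLFIN call that would actually consume the group communicator is left out
rather than guessing its API): each group builds a different expression body,
so each group's rank 0 JITs and caches a different module:

from mpi4py import MPI

comm = MPI.COMM_WORLD
color = comm.rank % 2
group_comm = comm.Split(color=color, key=comm.rank)

# Each group gets its own C++ expression body, so rank 0 of each group
# compiles a different module instead of racing on the same cache entry.
body = "sin(x[0])" if color == 0 else "cos(x[0])"

if group_comm.rank == 0:
    print("group %d would JIT: %s" % (color, body))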
>>
>>
>>
>>>
>>>
>>> Chris
>>>
>>
>>
_______________________________________________
fenics mailing list
[email protected]
http://fenicsproject.org/mailman/listinfo/fenics
