Dear Kostas,

I am looking into the source code.
> if (generic_expressions.size()) {...}
Sorry, it looks too complex for me.

FYI, I found that the values in the runs with 1 and 2 MPI processes differ at the following line.
>    if (iter.finished(crit)) {
This is in the "Newton_with_step_control" function in
getfem_model_solvers.h.

"crit" is calculated by rit = res / approx_eln and res and approx_eln is ...

$ mpirun -n 1 python demo_parallel_laplacian.py
res=1.31449e-11
approx_eln=6.10757
crit=2.15222e-12

$ mpirun -n 2 python demo_parallel_laplacian.py
res=6.02926
approx_eln=12.2151
crit=0.493588

res=0.135744
approx_eln=12.2151
crit=0.0111128
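For what it's worth, one plausible explanation for the discrepancy is that each rank computes the norm only over its local part of the distributed residual vector, without a reduction across ranks. A toy sketch in plain Python (my own illustration with made-up numbers, not GetFEM code) of how that would produce rank-dependent values:

```python
import math

# Full residual vector of the assembled system (toy values).
residual = [3.0, 4.0, 12.0, 0.0]

# With one process, the norm is computed over the whole vector.
global_norm = math.sqrt(sum(x * x for x in residual))  # 13.0

# With two processes, each rank only owns part of the vector.
local_parts = [residual[:2], residual[2:]]
local_norms = [math.sqrt(sum(x * x for x in part)) for part in local_parts]
# local_norms == [5.0, 12.0] -- each rank would see a different "res"

# A consistent value requires reducing the squared local norms across
# ranks (the role an MPI_Allreduce plays in a real MPI run).
reduced = math.sqrt(sum(n * n for n in local_norms))
assert reduced == global_norm
```

In a real run the squared local contributions would be summed with MPI_Allreduce before taking the square root, so that every rank reports the same residual.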

I am now trying to understand what the correct residual value of the
Newton(-Raphson) algorithm is.
I would be glad to hear your opinion.

Best Regards Tetsuo
On Tue, May 11, 2021 at 19:28 Tetsuo Koyama <tkoyama...@gmail.com> wrote:

> Dear Kostas
>
> > The relevant code is in the void model::assembly function in
> getfem_models.cc. The relevant code assembling the term you add with
> md.add_nonlinear_term(..) must be executed inside the if condition
> >
> > if (generic_expressions.size()) {...}
> > You can have a look there and ask for further help if it looks too
> complex. You should also check if the test works when you run it with
> md.add_nonlinear_term but setting the number of MPI processes to one.
>
> Thanks. I will check it. And the following command completed successfully.
>
> $ mpirun -n 1 python demo_parallel_laplacian.py
>
> So all we have to do is compare -n 1 with -n 2.
>
> Best regards Tetsuo
>
> On Tue, May 11, 2021 at 18:44 Konstantinos Poulios <logar...@googlemail.com> wrote:
>
>> Dear Tetsuo,
>>
>> The relevant code is in the void model::assembly function in
>> getfem_models.cc. The relevant code assembling the term you add with
>> md.add_nonlinear_term(..) must be executed inside the if condition
>>
>> if (generic_expressions.size()) {...}
>>
>> You can have a look there and ask for further help if it looks too
>> complex. You should also check if the test works when you run it with
>> md.add_nonlinear_term but setting the number of MPI processes to one.
>>
>> BR
>> Kostas
>>
>>
>> On Tue, May 11, 2021 at 10:44 AM Tetsuo Koyama <tkoyama...@gmail.com>
>> wrote:
>>
>>> Dear Kostas
>>>
>>> Thank you for your reply.
>>>
>>> > Interesting. In order to isolate the issue, can you also check with
>>> > md.add_linear_term(..)
>>> > ?
>>> It finishes correctly when using md.add_linear_term(..).
>>> It seems to be a problem with md.add_nonlinear_term(..).
>>> Is there anything I can check?
>>>
>>> Best regards Tetsuo.
>>>
>>> On Tue, May 11, 2021 at 17:19 Konstantinos Poulios <logar...@googlemail.com> wrote:
>>>
>>>> Dear Tetsuo,
>>>>
>>>> Interesting. In order to isolate the issue, can you also check with
>>>> md.add_linear_term(..)
>>>> ?
>>>>
>>>> Best regards
>>>> Kostas
>>>>
>>>> On Tue, May 11, 2021 at 12:22 AM Tetsuo Koyama <tkoyama...@gmail.com>
>>>> wrote:
>>>>
>>>>> Dear GetFEM community
>>>>>
>>>>> I am running the MPI parallelization of GetFEM. The commands I ran are
>>>>>
>>>>> $ git clone https://git.savannah.nongnu.org/git/getfem.git
>>>>> $ cd getfem
>>>>> $ bash autogen.sh
>>>>> $ ./configure --with-pic --enable-paralevel=2
>>>>> $ make
>>>>> $ make install
>>>>> $ mpirun -n 2 python demo_parallel_laplacian.py
>>>>>
>>>>> The Python script ends correctly. But when I changed the following
>>>>> linear term to a nonlinear term, the script did not end.
>>>>>
>>>>> -md.add_Laplacian_brick(mim, 'u')
>>>>> +md.add_nonlinear_term(mim, "Grad_u.Grad_Test_u")
>>>>>
>>>>> Do you know the reason?
>>>>> Best regards Tetsuo
>>>>>
>>>>
