Hi Andrea,

I checked your code, and the only thing is: if you generate a hyper_cube, 
there is only one cell and nothing to distribute, so one processor holds this 
cell and all DoFs; nevertheless, SolutionTransfer works and gives the correct 
result.
The same happens if you call refine_global(1): you have four cells, but taking 
ghost and artificial cells into account there is still nothing to distribute, 
so the result is the same. If you call refine_global(2), you have 16 cells and 
p4est distributes them over the processors. I ran your code only on 2 cores, 
but everything works; SolutionTransfer gives the correct result in all cases.
n_locally_owned_dofs can be zero in the first two cases, because the 
distribution is based on distributing cells, not DoFs, but correct me if I am 
wrong.
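A minimal sketch of what I mean (my own test, not your code; the 2D setup and Q1 elements are assumptions):

```cpp
// Sketch: on a single-cell hyper_cube, one process owns the cell and all
// DoFs, so n_locally_owned_dofs() is zero on every other process. With
// refine_global(2) there are 16 cells and p4est actually partitions them.
#include <deal.II/base/mpi.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/fe/fe_q.h>
#include <iostream>

int main(int argc, char **argv)
{
  using namespace dealii;
  Utilities::MPI::MPI_InitFinalize mpi_init(argc, argv);

  parallel::distributed::Triangulation<2> tria(MPI_COMM_WORLD);
  GridGenerator::hyper_cube(tria);
  // tria.refine_global(2);  // uncomment: 16 cells, now really distributed

  FE_Q<2>       fe(1);
  DoFHandler<2> dh(tria);
  dh.distribute_dofs(fe);

  // On ranks that own no cells this legitimately prints 0.
  std::cout << "rank "
            << Utilities::MPI::this_mpi_process(MPI_COMM_WORLD)
            << ": n_locally_owned_dofs = " << dh.n_locally_owned_dofs()
            << std::endl;
}
```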

Maybe this helps,

Best,
Martin

________________________________

Hi Martin,

thanks for checking.
I realized that this was happening on a very coarse mesh and no longer on a 
finer mesh.
In fact, it happens when n_locally_owned_dofs = 0 on one of the processes.
I asked Wolfgang and he told me this should not happen, even in that case.

Would you mind trying to run the attached code and let me know?
Many thanks for your help.
Andrea 

On 6/24/11 7:16 PM, Martin Steigemann wrote: 
 
>Hi Andrea,
> 
>after your answer I checked my code again and ran several tests, but 
>everything works ... I also had a look at the code of SolutionTransfer 
>and checked whether it really runs on every process, and it does.
> 
>My code is similar to yours: I have a PETSc::MPI vector on a 
>distributed triangulation, and after some calculations I transfer the 
>solution over the refined mesh, exactly as you do it. Maybe I miss 
>something ... all I can say is that it works on 2 up to 120 cores.
> 
>Have you updated to the latest deal.II version, and is your MPI 
>installation correct? Do you have trouble with other things in parallel? 
>Maybe there is another problem, or something is wrong with your 
>installation.
> 
>Maybe this helps,
> 
>Best,
> 
>Martin
> 
> 
>----- Original Message ----- 
>>From: Andrea Bonito 
>>To: Martin Steigemann 
>>Sent: Thursday, June 23, 2011 5:56 PM
>>Subject: Re: [deal.II] solution transfer in parallel::triangulation
>>
>>
Hi Martin,
>>
>>
>>solution.update_ghost_values()
this was indeed done.
>
>
>>and I think there is something missing: what does 
>>sol_trans(interpolated) do? Have you tried 
>>sol_trans.interpolate(interpolated)?
>>
>>
yes, of course; what I meant was
sol_trans.interpolate(interpolated)


Any suggestions? How did you fix your issue?

Thanks,
Andrea
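
In code, the corrected sequence would look something like this (a sketch only; VECTOR stands for the PETSc MPI vector type, and the exact deal.II signatures are assumptions based on the version discussed here):

```cpp
// Sketch (assumed names) of the refine-and-transfer steps with the
// corrected interpolate() call; every process must reach
// sol_trans.interpolate(), including processes that own no cells.
template <int dim, typename VECTOR>
void refine_and_transfer(parallel::distributed::Triangulation<dim> &tria,
                         DoFHandler<dim>                           &dh,
                         const FiniteElement<dim>                  &fe,
                         MPI_Comm                                   mpi_comm,
                         VECTOR                                    &solution)
{
  parallel::distributed::SolutionTransfer<dim, VECTOR> sol_trans(dh);

  tria.set_all_refine_flags();
  tria.prepare_coarsening_and_refinement();
  sol_trans.prepare_for_coarsening_and_refinement(solution);
  tria.execute_coarsening_and_refinement();

  dh.distribute_dofs(fe);
  VECTOR interpolated(mpi_comm, dh.n_dofs(), dh.n_locally_owned_dofs());
  sol_trans.interpolate(interpolated);   // not sol_trans(interpolated)

  // Rebuild the ghosted solution vector on the refined mesh.
  IndexSet locally_owned = dh.locally_owned_dofs();
  IndexSet locally_relevant;
  DoFTools::extract_locally_relevant_dofs(dh, locally_relevant);
  solution.reinit(mpi_comm, locally_owned, locally_relevant);
  solution = interpolated;
  solution.update_ghost_values();
}
```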

>>*******
>>VECTOR solution(mpi_comm, locally_owned, locally_relevant);
>>parallel::distributed::Triangulation tria(mpi_comm);
>>
>>...
>>
>>parallel::distributed::SolutionTransfer<dim,VECTOR> sol_trans;
>>tria.set_all_refine_flags();
>>tria.prepare_for_coarsening_and_refinement();
>>sol_trans.prepare_for_coarsening_and_refinement(solution);
>>tria.execute_coarsening_and_refinement();
>>
>>dh.distribute_dofs(fe)
>>VECTOR interpolated(mpi_comm, dh.n_dofs(), dh.n_locally_owned_dofs());
>>sol_trans(interpolated)
>>locally_owned = dh.locally_owned_dofs()
>>DoFTools::extract_locally_relevant_dofs(dh, locally_relevant)
>>solution.reinit(mpi_comm, locally_owned, locally_relevant)
>>solution = interpolated
>>solution.update_ghost_values()
>>*******
>>
>>When I run the above code using two procs, only one gets out of the call
>>sol_trans(interpolated)
>>
>>Does anyone have an idea?
>>Thanks,
>>Andrea
>>
>>P.S.: it works with one proc...
>>
>>
>>
>>
>>-- Andrea Bonito
>>Texas A&M University
>>Department of Mathematics
>>3368 TAMU
>>College Station, TX 77843-3368
>>Office: Blocker 641F
>>Phone:  +1 979 862 4873
>>Fax:    +1 979 862 4190
>>Website: www.math.tamu.edu/~bonito
>>
>>_______________________________________________
>>dealii mailing list http://poisson.dealii.org/mailman/listinfo/dealii
>

-- 
Andrea Bonito
Texas A&M University
Department of Mathematics
3368 TAMU
College Station, TX 77843-3368
Office: Blocker 641F
Phone:  +1 979 862 4873
Fax:    +1 979 862 4190
Website: www.math.tamu.edu/~bonito 
