Hi Dan,

I created a GitHub Discussion for this email here:

https://github.com/libMesh/libmesh/discussions/3086

GitHub Discussions seem to be a more effective and productive way for
people to ask and answer questions than the mailing list these days.

--
John


On Wed, Nov 10, 2021 at 1:35 PM Lior, Dan Uri <dan.l...@bcm.edu> wrote:

> The following code fragment is part of a program that behaves correctly
> when invoked on a single processor (i.e., using: mpirun -n 1 main)
>
>     libMesh::BoundaryInfo & bdry_info = tet_mesh.get_boundary_info();
>     for (auto pp_elem = tet_mesh.local_elements_begin();
>          pp_elem != tet_mesh.local_elements_end(); ++pp_elem)
>     {
>         Elem * elem = *pp_elem;
>         // A tet has 4 sides; check each one.
>         for (unsigned int is = 0; is < 4; ++is)
>             if (/* stuff */)
>                 bdry_info.add_side(elem, is, sideset_id);
>     }
>
> However, when I invoke it with several processors (i.e., using mpirun -n 4
> main), the BoundaryInfo object does not contain all of the sideset entries
> that it does in the single-processor run.
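>
> A quick way to see what each rank actually holds: a minimal diagnostic
> sketch, reusing the tet_mesh and bdry_info handles from the fragment
> above, on the assumption that BoundaryInfo::n_boundary_conds() reports
> the side entries stored by the calling processor:
>
>     // Each MPI rank reports how many boundary-side entries its own
>     // BoundaryInfo currently stores; if only local elements were
>     // flagged, the counts will differ from rank to rank.
>     libMesh::out << "rank " << tet_mesh.processor_id()
>                  << " holds " << bdry_info.n_boundary_conds()
>                  << " boundary side entries" << std::endl;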
>
> I suspect that the issue here is that several processors are writing to
> the same memory location (i.e., the BoundaryInfo object) without appropriate
> synchronization/mutex code.
>
> I certainly didn't add any such code.
>
> Is my suspicion correct, or are there other problems with the code?
>
> If my suspicion is correct, can someone point me to some correct code in which
> mesh elements are processed in parallel and a common data structure is
> modified by each thread/process?
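>
> One pattern that avoids cross-processor synchronization entirely is to
> have every rank build the same BoundaryInfo. A minimal sketch, assuming
> tet_mesh is a ReplicatedMesh (so every rank holds every element) and
> keeping the names and the elided condition from the fragment above:
>
>     libMesh::BoundaryInfo & bdry_info = tet_mesh.get_boundary_info();
>     // Iterate over *all* elements, not just the local ones, so that
>     // every MPI rank performs the identical add_side() calls and all
>     // ranks end up with the same BoundaryInfo contents.
>     for (auto pp_elem = tet_mesh.elements_begin();
>          pp_elem != tet_mesh.elements_end(); ++pp_elem)
>     {
>         Elem * elem = *pp_elem;
>         for (unsigned int is = 0; is < 4; ++is)
>             if (/* stuff */)
>                 bdry_info.add_side(elem, is, sideset_id);
>     }
>
> If tet_mesh is instead a DistributedMesh, each rank only stores its own
> slice of the mesh, so the loop above cannot visit every element; the
> per-rank additions would then need to be communicated across processors
> after the loop, which is where libMesh's parallel synchronization
> utilities come in.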
>
>
> dan
>

_______________________________________________
Libmesh-users mailing list
Libmesh-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/libmesh-users