On Sun, Apr 27, 2014 at 8:45 PM, Manav Bhatia <bhatiama...@gmail.com> wrote:

> Hi,
>
>     What is the best method to create a ParallelMesh from GmshIO? The
> assert in GmshIO::read_mesh() does not allow this to be used in parallel.
>

In theory, you should be able to read the Mesh on one processor and then
broadcast it to the others, but this doesn't currently work.  See the sample
code (run with mpiexec -n 2, for example) and the assert message below.  At
the end of the broadcast the processors disagree about how many elements the
Mesh has, probably because GmshIO isn't doing something quite right (to my
knowledge it has never been tested with ParallelMesh).

#include "libmesh/libmesh.h"
#include "libmesh/parallel_mesh.h"
#include "libmesh/mesh_communication.h"
#include "libmesh/gmsh_io.h"

using namespace libMesh;

int main (int argc, char** argv)
{
  LibMeshInit init(argc, argv);
  {
    ParallelMesh pmesh;

    // Note: UnstructuredMesh::read() calls broadcast(), so it can't be
    // called only on one processor.  Instead, use GmshIO::read() directly
    // on rank 0.
    if (libMesh::global_processor_id() == 0)
      {
        // Construct the Gmsh reader object and read the file.
        GmshIO gmsh_io(pmesh);
        gmsh_io.read("plate_hole.msh");
      }

    // Make sure everyone is present and accounted for.
    Parallel::barrier();

    // Fails with:
    // Assertion `mesh.comm().verify(mesh.n_elem())' failed.
    MeshCommunication mc;
    mc.broadcast(pmesh);
  }
  return 0;
}



-- 
John
_______________________________________________
Libmesh-users mailing list
Libmesh-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/libmesh-users