Hi,

Catalyst by default uses the MPI_COMM_WORLD communicator of the MPI library
that the simulation code is linked against. You can use another MPI
communicator as well; an example of that is in the
Examples/Catalyst/MPISubCommunicatorExample source directory.
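
In case it helps, here is a minimal sketch along the lines of that example,
using the C++ Catalyst API (vtkCPProcessor's Initialize overload that takes a
vtkMPICommunicatorOpaqueComm). The helper name InitializeCatalyst and the
even-ranks split are illustrative, not taken verbatim from the example:

#include <mpi.h>
#include <vtkCPProcessor.h>
#include <vtkMPI.h> // vtkMPICommunicatorOpaqueComm

static vtkCPProcessor* Processor = nullptr;

// Hand Catalyst a specific communicator instead of MPI_COMM_WORLD.
void InitializeCatalyst(MPI_Comm* handle)
{
  Processor = vtkCPProcessor::New();
  // Wrap the raw MPI handle so VTK can accept it without exposing
  // mpi.h in its public headers.
  vtkMPICommunicatorOpaqueComm comm(handle);
  Processor->Initialize(comm);
}

int main(int argc, char* argv[])
{
  MPI_Init(&argc, &argv);
  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  // Put only the even world ranks into the communicator Catalyst sees.
  int color = (rank % 2 == 0) ? 0 : MPI_UNDEFINED;
  MPI_Comm subComm;
  MPI_Comm_split(MPI_COMM_WORLD, color, rank, &subComm);

  if (subComm != MPI_COMM_NULL)
  {
    InitializeCatalyst(&subComm);
    // ... add a vtkCPPythonScriptPipeline and call
    // Processor->CoProcess(...) each time step on these ranks ...
    Processor->Finalize();
    Processor->Delete();
    MPI_Comm_free(&subComm);
  }

  MPI_Finalize();
  return 0;
}

The ranks that are not in the sub-communicator simply never touch Catalyst,
so the simulation can keep them busy with other work.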

Best,
Andy

On Sat, Oct 28, 2017 at 7:50 AM, Kolja Petersen <petersenko...@gmail.com>
wrote:

> Hello,
> I am trying to understand a Catalyst implementation detail.
>
> Because parallel Catalyst may transfer large amounts of data to a parallel
> pvserver, I assumed the Catalyst processes would add themselves to the
> pvserver's MPI communicator. However, MPI_Comm_spawn() is the only function
> I know of for this task, and "MPI_Comm_spawn" appears nowhere in the code
> (I searched case-insensitively).
>
> I thought that the standard Catalyst TCP port 22222 was used only for
> control messages between Catalyst and pvserver, and that the data exchange
> would go via MPI. But apparently there is no MPI connection between
> Catalyst and pvserver, and all data are sent over TCP port 22222, which
> could explain the network bottlenecks I have observed.
>
> Can somebody clarify this implementation detail?
> Thanks
> Kolja
_______________________________________________
Powered by www.kitware.com

Visit other Kitware open-source projects at 
http://www.kitware.com/opensource/opensource.html

Please keep messages on-topic and check the ParaView Wiki at: 
http://paraview.org/Wiki/ParaView

Search the list archives at: http://markmail.org/search/?q=ParaView

Follow this link to subscribe/unsubscribe:
http://public.kitware.com/mailman/listinfo/paraview
