Currently, Catalyst sends its data to pvserver through sockets, which will
likely not utilize an HPC's fast interconnect. We hope to address this in
the future using ADIOS, but I don't have a timetable for when that will be
done.
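
Regarding the sub-communicator example mentioned further down in this thread
(Examples/Catalyst/MPISubCommunicatorExample), the rough shape of it is
sketched below for anyone following along. This is only a minimal sketch,
assuming the legacy C++ adaptor API (the vtkCPProcessor::Initialize()
overload taking a vtkMPICommunicatorOpaqueComm from vtkMPI.h); see the
example source for the authoritative version.

#include <mpi.h>
#include <vtkCPProcessor.h>
#include <vtkMPI.h> // vtkMPICommunicatorOpaqueComm wraps an MPI_Comm handle

int main(int argc, char* argv[])
{
  MPI_Init(&argc, &argv);

  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  // Split off a sub-communicator (here: the even ranks) for Catalyst to use
  // instead of MPI_COMM_WORLD.
  MPI_Comm subComm;
  MPI_Comm_split(MPI_COMM_WORLD, rank % 2, rank, &subComm);

  if (rank % 2 == 0)
  {
    // Initialize Catalyst on the sub-communicator rather than MPI_COMM_WORLD.
    vtkCPProcessor* processor = vtkCPProcessor::New();
    vtkMPICommunicatorOpaqueComm comm(&subComm);
    processor->Initialize(comm);

    // ... add a vtkCPPythonScriptPipeline, then call
    // RequestDataDescription()/CoProcess() each time step ...

    processor->Finalize();
    processor->Delete();
  }

  MPI_Comm_free(&subComm);
  MPI_Finalize();
  return 0;
}

Note that this only changes which simulation ranks Catalyst runs on; the Live
connection to pvserver still goes over the socket described above.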

On Sat, Oct 28, 2017 at 12:57 PM, Kolja Petersen <petersenko...@gmail.com>
wrote:

>
>
> On Sat, Oct 28, 2017 at 5:07 PM, Andy Bauer <andy.ba...@kitware.com>
> wrote:
>
>> FYI: pvserver will likely be run in a separate MPI job if you're doing a
>> Live connection.
>>
>
> Yes, so the pvserver MPI job will have one MPI_COMM_WORLD, and the
> Catalyst enabled simulation will have a different MPI_COMM_WORLD.
>
> The question is: how does Catalyst send its data to the other
> communicator? AFAIK, there is no connection between the two unless the
> second communicator is spawned from the first by MPI_Comm_spawn().
> Kolja
>
>
>> On Sat, Oct 28, 2017 at 11:05 AM, Andy Bauer <andy.ba...@kitware.com>
>> wrote:
>>
>>> Hi,
>>>
>>> Catalyst by default uses MPI_COMM_WORLD of the existing MPI library that
>>> the simulation code is linked with. You can use another MPI communicator
>>> as well. An example of that is in the
>>> Examples/Catalyst/MPISubCommunicatorExample source directory.
>>>
>>> Best,
>>> Andy
>>>
>>> On Sat, Oct 28, 2017 at 7:50 AM, Kolja Petersen <petersenko...@gmail.com>
>>> wrote:
>>>
>>>> Hello,
>>>> I am trying to understand a Catalyst implementation detail.
>>>>
>>>> Because parallel Catalyst may transfer huge amounts of data to a parallel
>>>> pvserver, I thought the Catalyst processes would have themselves added to
>>>> the pvserver's MPI communicator. However, MPI_Comm_spawn() is the only
>>>> function I know of for this task, and I find "MPI_Comm_spawn" nowhere in
>>>> the code (searched case-insensitively).
>>>>
>>>> I thought that the standard Catalyst TCP port 22222 was only used for
>>>> control messages between Catalyst and pvserver, and that data exchange
>>>> would go via MPI. But apparently there is no MPI connection between
>>>> Catalyst and pvserver, and all data are sent via TCP:22222, which could
>>>> explain the observed network bottlenecks.
>>>>
>>>> Can somebody clarify this implementation detail?
>>>> Thanks
>>>> Kolja
>>>>
>>>
>>
>