> it a shot. I'm doing this on some SGI and Cray machines; I don't know
> if they have special ways to do this like the ones you mentioned exist at NERSC.
>
>
> Thanks,
>
>
> Tim
>
>
________________________________
From: Andy Bauer
Sent: Tuesday, October 25, 2016 4:43 PM
To: Ufuk Utku Turuncoglu (BE)
Cc: Gallagher, Timothy P; paraview@paraview.org
Subject: Re: [Paraview] Non-blocking coprocessing
Hi Tim,
This may be better to do as an in transit setup. This way the processes
would be independent. Through Catalyst I'd worry about all of the processes
waiting on the global rank 0 doing work before all of the other Catalyst
ranks return control to the simulation. Depending on the system you…
Hi Tim,
I am not sure whether non-blocking communication is supported by ParaView
and Catalyst, but I think that assigning an extra core for the global
reduction is possible. You could use MPI communication for this purpose.
So, look at the following code of mine for the overloaded coprocessori…
Hello again!
I'm looking at using coprocessing for something that may take a while to
actually compute, so I would like to do it in a non-blocking fashion.
Essentially I am going to be extracting data from the simulation into some
numpy arrays (so once copied, the original data in the pipeline…
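The copy-then-continue idea described here can be sketched with a plain background thread, as a stand-in for whatever the real Catalyst pipeline would do. This is a minimal sketch using only the standard library; `slow_coprocess`, the field data, and the use of a list instead of a numpy array are all illustrative assumptions.

```python
# Sketch of non-blocking coprocessing: snapshot the field into a private
# copy, hand it to a worker thread, and let the time loop continue while
# the (slow) analysis runs on the copy. Names and data are illustrative.
from concurrent.futures import ThreadPoolExecutor

def slow_coprocess(snapshot):
    # Stand-in for the expensive analysis; touches only the copy.
    return sum(snapshot) / len(snapshot)

executor = ThreadPoolExecutor(max_workers=1)

field = [1.0, 2.0, 3.0, 4.0]   # live simulation data
snapshot = list(field)         # copy, so the solver may overwrite `field`
future = executor.submit(slow_coprocess, snapshot)

field[0] = 99.0                # simulation advances immediately

mean = future.result()         # collect the result whenever it is needed
print(mean)                    # → 2.5 (unaffected by the later overwrite)
```

The key point is the explicit copy: once the snapshot is taken, the simulation is free to mutate its own arrays without racing the analysis.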