On Fri 7. Apr 2023 at 07:06, Astor Piaz <appiazzo...@gmail.com> wrote:

> Hello petsc-users,
> I am trying to use a code that is parallelized with a combination of
> OpenMP and MKL parallelism, where OpenMP threads are able to spawn MPI
> processes.
>

Is this really the correct way to go?


Would it not be more suitable (or simpler) to run your application on an
MPI sub-communicator which maps one rank to, say, one compute node, and then
within each rank of the sub-communicator use your threaded OpenMP / MKL
code with as many threads as there are physical cores per node (and/or
hyper-threads, if that is effective for you)?
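
If that route appeals, a minimal sketch of one way to build such a sub-communicator
with MPI-3's MPI_Comm_split_type is below (the names and the one-rank-per-node
choice are only illustrative; per-node thread counts would be set separately,
e.g. via OMP_NUM_THREADS / MKL_NUM_THREADS):

#include <mpi.h>

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);

  /* Group the ranks that share a node into a per-node communicator. */
  MPI_Comm node_comm;
  MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                      MPI_INFO_NULL, &node_comm);

  int node_rank;
  MPI_Comm_rank(node_comm, &node_rank);

  /* Keep one rank per node (here: local rank 0) in the communicator that
     drives the solver; all other ranks get MPI_COMM_NULL. */
  MPI_Comm solver_comm;
  MPI_Comm_split(MPI_COMM_WORLD, node_rank == 0 ? 0 : MPI_UNDEFINED,
                 0, &solver_comm);

  if (solver_comm != MPI_COMM_NULL) {
    /* This rank runs the threaded OpenMP / MKL code for its node,
       using as many threads as there are cores on the node. */
  }

  if (solver_comm != MPI_COMM_NULL) MPI_Comm_free(&solver_comm);
  MPI_Comm_free(&node_comm);
  MPI_Finalize();
  return 0;
}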

Thanks,
Dave

> I have carefully scheduled the processes such that the right number is
> launched at the right time.
> When trying to use my code inside a MatShell (for later use in an FGMRES
> KSP solver), MKL processes are not being used.
>
> I am sorry if this has been asked before.
> What configuration should I use in order to profit from MPI+OpenMP+MKL
> parallelism?
>
> Thank you!
> --
> Astor
>
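
For reference, a bare-bones sketch of how a shell matrix feeds an FGMRES KSP
is below; MyMatMult is only a stand-in (an identity operator) for the place
where the threaded OpenMP / MKL code would apply the operator, and the sizes
are illustrative:

#include <petscksp.h>

/* Placeholder MatMult callback: in the real code the threaded
   OpenMP / MKL kernel would compute y = A*x here.  This sketch just
   copies x into y so that it is self-contained. */
static PetscErrorCode MyMatMult(Mat A, Vec x, Vec y)
{
  PetscFunctionBeginUser;
  PetscCall(VecCopy(x, y));
  PetscFunctionReturn(0);
}

int main(int argc, char **argv)
{
  Mat      A;
  KSP      ksp;
  PC       pc;
  Vec      b, x;
  PetscInt n = 100; /* local problem size, illustrative */

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));

  /* Matrix-free operator wrapped in a MatShell. */
  PetscCall(MatCreateShell(PETSC_COMM_WORLD, n, n, PETSC_DETERMINE,
                           PETSC_DETERMINE, NULL, &A));
  PetscCall(MatShellSetOperation(A, MATOP_MULT, (void (*)(void))MyMatMult));

  PetscCall(MatCreateVecs(A, &x, &b));
  PetscCall(VecSet(b, 1.0));

  /* FGMRES on the shell operator; no preconditioner, since the shell
     has no stored entries for the default PC to factor. */
  PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
  PetscCall(KSPSetType(ksp, KSPFGMRES));
  PetscCall(KSPSetOperators(ksp, A, A));
  PetscCall(KSPGetPC(ksp, &pc));
  PetscCall(PCSetType(pc, PCNONE));
  PetscCall(KSPSetFromOptions(ksp));
  PetscCall(KSPSolve(ksp, b, x));

  PetscCall(KSPDestroy(&ksp));
  PetscCall(VecDestroy(&x));
  PetscCall(VecDestroy(&b));
  PetscCall(MatDestroy(&A));
  PetscCall(PetscFinalize());
  return 0;
}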
