Wen Jiang writes:
> Thanks for this information. Could you tell me an efficient way to do this
> in PETSc? I am planning to use at least 32 threads and need to minimize the
> synchronization overhead. Any suggestions?
You're probably better off using more MPI processes. Perhaps
surprisingly give
Thanks for this information. Could you tell me an efficient way to do this
in PETSc? I am planning to use at least 32 threads and need to minimize the
synchronization overhead. Any suggestions?
Thanks!
Wen
On Mon, Jun 23, 2014 at 10:59 PM, Jed Brown wrote:
Wen Jiang writes:
> Dear all,
>
> I am trying to change my MPI finite element code to an OpenMP one. I am
> not familiar with the usage of OpenMP in PETSc; could anyone give me some
> suggestions?
>
> To assemble the matrix in parallel using OpenMP pragmas, can I directly
> call MatSetValues(ADD_VALUES) or do I need to add some l
Dear all,
I am trying to change my MPI finite element code to an OpenMP one. I am not
familiar with the usage of OpenMP in PETSc; could anyone give me some
suggestions?
To assemble the matrix in parallel using OpenMP pragmas, can I directly
call MatSetValues(ADD_VALUES) or do I need to add some l