There is no support for threaded matrix assembly in PETSc. Here is a recent email thread on the issue:
https://mail.google.com/mail/u/0/#search/label%3Apetsc+thread+assembly/15f10d078e9ea8e7

So you pretty much have to deal with race conditions yourself. There are several failure modes with threads:

1) Off-processor entries are stashed in a global data structure for a scatter/gather stage during matrix finalize. A simple fix is to have every processor compute all elements that touch its vertices (an overlapping element decomposition) and then have PETSc ignore off-processor entries. This is how I do it: the redundant computation simply avoids the communication. It also avoids synchronization.

2) The 1D array data structure is reconstructed when the data spills the preallocated memory. Solution: allocate memory exactly.

3) Just normal race conditions. Coloring is the basic approach to this problem, although I think Jed thinks this is not sufficient in PETSc as is.

4) Unknown unknowns.

Mark

On Tue, Oct 17, 2017 at 3:24 PM, Yoon, Eisung <yo...@rpi.edu> wrote:

> Hi Mark,
>
> Seegyoung here at SCOREC is looking for a way to assemble elements into a
> global matrix with PETSc using threads. I'm aware that you have installed a
> thread-safe PETSc on the NERSC system for XGC and that you know the
> details of PETSc.
>
> So could you tell us whether PETSc supports thread-safe global matrix
> assembly with the thread-safe version of PETSc, and give some details,
> please?
>
> Seegyoung may describe her problem in more detail later.
>
> Thank you.
>
> Best,
>
> ES