Thanks John, I modified the operator() and it works now:

void AugmentSparsityToDense::operator()(const MeshBase::const_element_iterator & range_begin,
                                        const MeshBase::const_element_iterator & range_end,
                                        processor_id_type p,
                                        map_type & coupled_elements)
{
  const CouplingMatrix * const null_mat = nullptr;

  for (const auto & elem : as_range(range_begin, range_end))
    {
      coupled_elements.emplace(elem, null_mat);

      for (const auto & elem_remote : _mesh.active_element_ptr_range())
        coupled_elements.emplace(elem_remote, null_mat);
    }
}

I originally thought range_begin and range_end covered all of the active elements, but it turns out that (at least in the case I was debugging) they hand me one element at a time. Therefore I have to loop over all the active elements explicitly to attach the coupling matrix. I think the documentation on operator() could be improved, unless I missed something there. Now it works, and it is indeed much faster than it was when I had "total number of mallocs used during MatSetValues calls != 0".
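For anyone who finds this thread later: the functor itself just needs to be registered with the DofMap before the sparsity pattern is built. In my case that is something like the following (a simplified sketch, not verbatim from my code; the CondensedEigenSystem type and the "Eigensystem" name are the ones from the eigenproblem example I started from, and the constructor argument is whatever your functor takes):

// Sketch: register the coupling functor before EquationSystems::init(),
// so that the sparsity pattern already includes the all-to-all coupling.
AugmentSparsityToDense augment_sparsity(mesh);

CondensedEigenSystem & eigen_system =
  equation_systems.get_system<CondensedEigenSystem>("Eigensystem");

eigen_system.get_dof_map().add_coupling_functor(augment_sparsity);

equation_systems.init();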
Regarding the dense matrix -- I am happy to submit a PR, but I need some time to go through the source code first (today is my third day using libMesh).
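From a quick look at the PETSc page you linked, I imagine the core of it would be something along these lines (an untested sketch on my part; N stands for the global number of DOFs and error checking is omitted):

#include <petscmat.h>

// Sketch only: create a parallel dense matrix, let PETSc allocate
// the storage, then fill and assemble it as usual.
Mat K;
MatCreateDense(PETSC_COMM_WORLD,
               PETSC_DECIDE, PETSC_DECIDE,  // local sizes chosen by PETSc
               N, N,                        // global sizes
               NULL,                        // NULL => PETSc allocates storage
               &K);

// ... MatSetValues(K, ...) with the element contributions ...

MatAssemblyBegin(K, MAT_FINAL_ASSEMBLY);
MatAssemblyEnd(K, MAT_FINAL_ASSEMBLY);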
I am not trying to implement a fast multipole method. I am just assembling a Galerkin projection of a Fredholm integral equation for a spatial covariance. I only need to factorize the covariance matrix once and for all, so I can live with the N^2 assembly time. I am aware of some of the multigrid methods in the literature, but I don't really want to deal with the accuracy issues those methods bring.

Thanks,
Gary

On Mon, May 18, 2020 at 9:11 AM John Peterson <jwpeter...@gmail.com> wrote:
>
> On Sat, May 16, 2020 at 5:12 PM Gary Hu <hugary1...@gmail.com> wrote:
>
>> Hello all,
>>
>> I am new to libMesh. I am trying to solve a generalized eigenvalue
>> problem for an integral equation. I tried the example "Solving a
>> generalized Eigen Problem" and it works fine.
>>
>> Next, I modified the assembly routine to have a skeleton like this:
>>
>>   for (const auto & elem : mesh.active_local_element_ptr_range())
>>     for (const auto & elem_remote : mesh.active_element_ptr_range())
>>
>> So basically, for each local element I am looping over all the active
>> elements (local and nonlocal) to assemble the integral equation. I am
>> pretty confident that I used fe->reinit and the other housekeeping
>> stuff properly.
>
> It sounds like you are trying to build a dense matrix... in such a case
> all the extra "bookkeeping" indices etc. that are required for sparse
> matrices are not needed, so this approach won't scale well. I think you
> may want to look into building a dense matrix for your method, e.g.
> MatCreateDense in PETSc [0]. We don't have C++ interfaces for MatDense
> in libMesh yet, since our main focus is FEA, but it's something I think
> we'd definitely like to have a PR adding.
>
> Regarding your original question, I don't see anything obviously wrong
> with the GhostingFunctor approach you are taking, so it may just be a
> coding bug that's causing it to not work. I'd start by adding print
> statements to check whether it's actually being called and what
> elements are being added to coupled_elements. I don't know how much
> time you want to spend debugging this approach though, because even if
> it works I fear it is going to be incredibly slow, not only due to the
> storing-dense-matrix-in-sparse-format issue, but also because of the
> N^2 assembly loop you've proposed above... perhaps there is an existing
> way to speed up whatever you are doing (fast multipole method?) which
> you should look into first.
>
> [0]:
> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateDense.html
>
> --
> John

_______________________________________________
Libmesh-users mailing list
Libmesh-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/libmesh-users