How about something like,

  MatMPIAIJGetSeqAIJ(A, NULL, &Ao, NULL);
  MatGetOwnershipRange(A, &rS, &rE);
  for (r = 0; r < rE - rS; ++r) {
    sum = 0.0;
    MatGetRow(Ao, r, &ncols, NULL, &vals);
    for (c = 0; c < ncols; ++c) sum += PetscAbsScalar(vals[c]);
    MatRestoreRow(Ao, r, &ncols, NULL, &vals);
    // do what you need with sum
  }
Barry
> "Zhang, Junchao via petsc-users" writes:
> Perhaps PETSc should have a MatGetRemoteRow (or
> MatGetRowOffDiagonalBlock) (A, r, &ncols, &cols, &vals). MatGetRow()
> internally has to allocate memory and sort indices and values from the
> local diagonal block and off-diagonal block. It is totally a waste in
> this case.
On Mon, Mar 4, 2019 at 10:39 AM Matthew Knepley via petsc-users
<petsc-users@mcs.anl.gov> wrote:
On Mon, Mar 4, 2019 at 11:28 AM Cyrill Vonplanta via petsc-users
<petsc-users@mcs.anl.gov> wrote:
Dear Petsc Users,
I am trying to implement a variant of the $l^1$-Gauss-Seidel smoother from
https://doi.org/10.1137/100798806 (eq. 6.1 and below). One of the main issues
is that I need to compute the sum $\sum_j |a_{i_j}|$ of the matrix entries
that are not part of the local diagonal block.
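[For reference, the construction in the cited paper is roughly the following; this is a sketch from memory, and the symbols below are my own notation, so check eq. (6.1) in the paper for the exact form. With the rows of $A$ split into on-process and off-process column sets, one adds the off-process row sum to the diagonal:

```latex
\[
d_i = \sum_{j \in \Omega_i^{\mathrm{off}}} |a_{ij}|,
\qquad
x_i \leftarrow x_i + \frac{r_i}{a_{ii} + d_i},
\qquad r = b - A x .
\]
```

So the only parallel-specific quantity needed is exactly the row sum $d_i$ over the off-diagonal block.]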
Hello,
I want to solve many symmetric linear systems one after another in parallel
using boomerAMG + KSPCG and need to make the matrix transfer more efficient.
Matrices are symmetric in structure and values. boomerAMG + KSPCG work fine.
So far I have been loading the entire matrices but I