Hi,
I am using MatSetSizes() followed by MatLoad() to distribute the rows of a
*sparse* matrix (*480000x480000*) across the processors. However, it seems
like the entire matrix is getting loaded on each processor instead of being
distributed. What am I missing here?

*code snippet:*
    Mat         Js;
    MatType     type = MATMPIAIJ;
    PetscViewer viewerJ;
    PetscInt    m, n, M; /* N already holds the global size (480000) */
    PetscCall(PetscViewerBinaryOpen(PETSC_COMM_WORLD, "Js.dat",
                                    FILE_MODE_READ, &viewerJ));
    PetscCall(MatCreate(PETSC_COMM_WORLD, &Js));
    PetscCall(MatSetSizes(Js, PETSC_DECIDE, PETSC_DECIDE, N, N));
    PetscCall(MatSetType(Js, type));
    PetscCall(MatLoad(Js, viewerJ));
    PetscCall(PetscViewerDestroy(&viewerJ));
    PetscCall(MatGetLocalSize(Js, &m, &n));
    PetscCall(MatGetSize(Js, &M, &N));
    /* use PetscInt_FMT rather than %d so the format matches PetscInt */
    PetscCall(PetscPrintf(PETSC_COMM_WORLD,
        "Js,Local rows: %" PetscInt_FMT ", Local columns: %" PetscInt_FMT "\n",
        m, n));

*Output* of 'mpiexec -n 4 ./check':
Js,Local rows: 480000, Local columns: 480000
Js,Local rows: 480000, Local columns: 480000
Js,Local rows: 480000, Local columns: 480000
Js,Local rows: 480000, Local columns: 480000
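
In case it helps, here is a minimal sketch of how I would verify the
per-rank distribution (assuming the Js loaded above; MatGetOwnershipRange()
and PetscSynchronizedPrintf() report from every rank, unlike PetscPrintf(),
which prints only from rank 0):

    /* Sketch: print this rank's ownership range. */
    PetscMPIInt rank;
    PetscInt    rstart, rend;
    PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));
    PetscCall(MatGetOwnershipRange(Js, &rstart, &rend));
    PetscCall(PetscSynchronizedPrintf(PETSC_COMM_WORLD,
        "[%d] owns rows %" PetscInt_FMT " to %" PetscInt_FMT "\n",
        rank, rstart, rend));
    PetscCall(PetscSynchronizedFlush(PETSC_COMM_WORLD, PETSC_STDOUT));

With 4 ranks and PETSC_DECIDE, I would expect each range to span roughly
120000 rows.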
