Sorry, I didn't notice these emails for a long time.
PETSc does provide a "simple" mechanism to redistribute your matrix that does
not require you to explicitly do the redistribution.
You must create an MPIAIJ matrix over all the MPI ranks, but simply provide
all the rows on the first rank.
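A minimal sketch of that layout, with all names invented for illustration
(global size N, a toy diagonal fill); the message above is cut off before
naming the follow-up step, so the redistribution note in the comments is only
one possibility:

  #include <petscmat.h>

  int main(int argc, char **argv)
  {
    Mat         A;
    PetscMPIInt rank;
    PetscInt    N = 100; /* global size; placeholder for the app's own */

    PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
    PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));

    PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
    /* all N rows live on rank 0; every other rank owns zero rows */
    PetscCall(MatSetSizes(A, rank == 0 ? N : 0, rank == 0 ? N : 0, N, N));
    PetscCall(MatSetType(A, MATMPIAIJ));
    PetscCall(MatSetUp(A));

    if (rank == 0) { /* only rank 0 holds data; fill from its arrays */
      for (PetscInt i = 0; i < N; i++) {
        PetscScalar v = 2.0;
        PetscCall(MatSetValues(A, 1, &i, 1, &i, &v, INSERT_VALUES));
      }
    }
    PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
    PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));

    /* one way to rebalance afterwards is a KSP solve run with
       -pc_type redistribute, which moves rows off rank 0 for the solve */
    PetscCall(MatDestroy(&A));
    PetscCall(PetscFinalize());
    return 0;
  }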
On Tue, Dec 7, 2021 at 10:04 PM Faraz Hussain wrote:
> The matrix in memory is in IJV (Spooles) or CSR3 (Pardiso). The
> application was written to use a variety of different direct solvers, but
> Spooles and Pardiso are what I am most familiar with.
>
I assume the CSR3 has the a, i, j arrays used in standard CSR storage.
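If those arrays are already sitting in memory, one option (a sketch under my
own assumptions, not necessarily what this thread settled on) is
MatCreateMPIAIJWithArrays(), which accepts exactly such a row-pointer /
column-index / value triple. Note it wants 0-based indices, while Pardiso's
CSR3 is commonly 1-based, and the 3x3 matrix below is invented:

  #include <petscmat.h>

  int main(int argc, char **argv)
  {
    /* 0-based CSR3 of the made-up matrix [[2 0 1],[0 3 0],[1 0 4]],
       treated as this rank's 3 local rows */
    PetscInt    rowptr[] = {0, 2, 3, 5};
    PetscInt    col[]    = {0, 2, 1, 0, 2};
    PetscScalar val[]    = {2.0, 1.0, 3.0, 1.0, 4.0};
    Mat         A;

    PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
    /* PETSc copies these arrays into its own MPIAIJ storage */
    PetscCall(MatCreateMPIAIJWithArrays(PETSC_COMM_WORLD, 3, PETSC_DECIDE,
                                        PETSC_DETERMINE, 3, rowptr, col, val,
                                        &A));
    PetscCall(MatView(A, PETSC_VIEWER_STDOUT_WORLD));
    PetscCall(MatDestroy(&A));
    PetscCall(PetscFinalize());
    return 0;
  }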
Thanks, that makes sense. I guess I was hoping PETSc KSP is like Intel's
cluster sparse solver, where it handles distributing the matrix to the other
ranks for you.
It sounds like that is not the case and I need to manually distribute the
matrix to the ranks?
Thanks, I took a look at ex10.c in ksp/tutorials. It seems to do as you wrote:
"it efficiently gets the matrix from the file spread out over all the ranks."
However, in my application I only want rank 0 to read and assemble the matrix.
I do not want the other ranks trying to get the matrix data.
If you use MatLoad() it never has the entire matrix on a single rank at the
same time; it efficiently gets the matrix from the file spread out over all the
ranks.
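For concreteness, a minimal sketch of that MatLoad() path; "matrix.dat" is a
placeholder for a file written earlier with MatView() on a binary viewer:

  #include <petscmat.h>

  int main(int argc, char **argv)
  {
    Mat         A;
    PetscViewer viewer;

    PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
    /* every rank opens the same file, but each receives only its rows */
    PetscCall(PetscViewerBinaryOpen(PETSC_COMM_WORLD, "matrix.dat",
                                    FILE_MODE_READ, &viewer));
    PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
    PetscCall(MatSetType(A, MATMPIAIJ));
    PetscCall(MatLoad(A, viewer)); /* rows spread over all ranks */
    PetscCall(PetscViewerDestroy(&viewer));
    PetscCall(MatDestroy(&A));
    PetscCall(PetscFinalize());
    return 0;
  }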
> On Dec 6, 2021, at 11:04 PM, Faraz Hussain via petsc-users wrote:
>
> I am studying the examples but it seems all ranks read the full matrix. Is
> there an MPI example where only rank 0 reads the matrix?
I assume you are using PETSc to load matrices. What example are you looking
at?
On Mon, Dec 6, 2021 at 11:04 PM Faraz Hussain via petsc-users <
petsc-users@mcs.anl.gov> wrote:
> I am studying the examples but it seems all ranks read the full matrix. Is
> there an MPI example where only rank 0 reads the matrix?
I am studying the examples but it seems all ranks read the full matrix. Is
there an MPI example where only rank 0 reads the matrix?
I don't want all ranks to read my input matrix and consume a lot of memory
allocating data for the arrays.
I have worked with Intel's cluster sparse solver, and there it handles
distributing the matrix to the other ranks for you.