See case "-num_rhs" in petsc/src/ksp/ksp/tests/ex30.c

Hong
From: petsc-users on behalf of Matthew Knepley
Sent: Tuesday, August 4, 2020 7:29 PM
To: Anthony Paul Haas
Cc: petsc-users
Subject: Re: [petsc-users] reuse Mumps factorization for multiple RHS
On Tue, Aug 4, 2020 at 7:57 PM Anthony Paul Haas wrote:

> Hello,
>
> When using MUMPS to solve a linear system of equations (see below), can I
> reuse the factorization to solve for multiple RHS, i.e., can I use KSPSolve
> multiple times while only building a different RHS in between the calls to
> KSPSolve?

Yes, it automatically uses the same factorization.
Hello,

When using MUMPS to solve a linear system of equations (see below), can I
reuse the factorization to solve for multiple RHS, i.e., can I use KSPSolve
multiple times while only building a different RHS in between the calls to
KSPSolve?

Thanks,

Anthony

call KSPSetType(self%ksp,KSPPREONLY
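For reference, a minimal self-contained sketch of this pattern using the C
interface (the tiny tridiagonal matrix, the loop count, and a MUMPS-enabled
PETSc build are assumptions for illustration, not Anthony's code): MUMPS
factors the matrix on the first KSPSolve, and every later KSPSolve reuses
that factorization as long as the operator is not changed again with
KSPSetOperators.

  #include <petscksp.h>

  int main(int argc, char **argv)
  {
    Mat      A;
    Vec      b, x;
    KSP      ksp;
    PC       pc;
    PetscInt n = 10, i, Istart, Iend;

    PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));

    /* Small tridiagonal stand-in for the application matrix */
    PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
    PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n));
    PetscCall(MatSetFromOptions(A));
    PetscCall(MatSetUp(A));
    PetscCall(MatGetOwnershipRange(A, &Istart, &Iend));
    for (i = Istart; i < Iend; i++) {
      if (i > 0)     PetscCall(MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES));
      if (i < n - 1) PetscCall(MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES));
      PetscCall(MatSetValue(A, i, i, 2.0, INSERT_VALUES));
    }
    PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
    PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));
    PetscCall(MatCreateVecs(A, &x, &b));

    /* Direct solve through MUMPS: KSPPREONLY just applies the factorization */
    PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
    PetscCall(KSPSetOperators(ksp, A, A));
    PetscCall(KSPSetType(ksp, KSPPREONLY));
    PetscCall(KSPGetPC(ksp, &pc));
    PetscCall(PCSetType(pc, PCLU));
    PetscCall(PCFactorSetMatSolverType(pc, MATSOLVERMUMPS));
    PetscCall(KSPSetFromOptions(ksp));

    for (i = 0; i < 5; i++) {
      PetscCall(VecSet(b, (PetscScalar)(i + 1)));  /* build a new RHS between solves */
      PetscCall(KSPSolve(ksp, b, x));              /* factors on the first call, reuses after */
    }

    PetscCall(VecDestroy(&b));
    PetscCall(VecDestroy(&x));
    PetscCall(MatDestroy(&A));
    PetscCall(KSPDestroy(&ksp));
    PetscCall(PetscFinalize());
    return 0;
  }

The same idea carries over to the Fortran interface unchanged; only the
error-checking syntax differs.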
If you have many right hand sides it obviously pays to spend more time
building the preconditioner so that each solve is faster.

If you provide more information on your linear system we might have
suggestions. CFD, so is your linear system a Poisson problem? Are you using
geometric or algebraic multigrid with PETSc? If it is not a Poisson problem,
how can you describe the linear system?

   Barry
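As a concrete illustration of that trade-off, here is a minimal sketch (the
GMRES/ILU(2) choice, the tiny sequential tridiagonal matrix, and the loop
count are illustrative assumptions, not taken from this thread): a more
expensive incomplete factorization is set up once and its cost is amortized
over the many solves that follow.

  #include <petscksp.h>

  int main(int argc, char **argv)
  {
    Mat      A;
    Vec      b, x;
    KSP      ksp;
    PC       pc;
    PetscInt n = 100, i;

    PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));

    /* Small sequential tridiagonal stand-in for the application matrix */
    PetscCall(MatCreate(PETSC_COMM_SELF, &A));
    PetscCall(MatSetSizes(A, n, n, n, n));
    PetscCall(MatSetFromOptions(A));
    PetscCall(MatSetUp(A));
    for (i = 0; i < n; i++) {
      if (i > 0)     PetscCall(MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES));
      if (i < n - 1) PetscCall(MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES));
      PetscCall(MatSetValue(A, i, i, 2.0, INSERT_VALUES));
    }
    PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
    PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));
    PetscCall(MatCreateVecs(A, &x, &b));

    /* Pay more for the preconditioner up front: ILU(2) instead of ILU(0) */
    PetscCall(KSPCreate(PETSC_COMM_SELF, &ksp));
    PetscCall(KSPSetOperators(ksp, A, A));
    PetscCall(KSPSetType(ksp, KSPGMRES));
    PetscCall(KSPGetPC(ksp, &pc));
    PetscCall(PCSetType(pc, PCILU));
    PetscCall(PCFactorSetLevels(pc, 2));
    PetscCall(KSPSetFromOptions(ksp));

    /* The setup cost is amortized: the preconditioner is built on the first
       solve and reused for every following right-hand side. */
    for (i = 0; i < 100; i++) {
      PetscCall(VecSet(b, (PetscScalar)(i + 1)));
      PetscCall(KSPSolve(ksp, b, x));
    }

    PetscCall(VecDestroy(&b));
    PetscCall(VecDestroy(&x));
    PetscCall(MatDestroy(&A));
    PetscCall(KSPDestroy(&ksp));
    PetscCall(PetscFinalize());
    return 0;
  }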
Hi All,

I have currently added the PETSc library with our CFD solver.

In this I need to use KSPSolve(...) multiple times for the same matrix A. I
have read that PETSc does not support passing multiple RHS vectors in the
form of a matrix, and the only solution to this is calling KSPSolve multiple
times, as in the example given here: http://www
Dear Matthew,

Thanks for your comments. I will prepare a log summary. To activate a
separate log stage for each iteration (i.e. each RHS vector), I tried the
code below, but got an error. Could you give me a comment? Does
PetscLogView() allow users to use different output file names, so that each
stage can be written to its own file?
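For what it is worth, one way that should give each right-hand side its own
log stage and its own summary file is sketched below (an assumption-laden
sketch, not Evan's code: it relies on logging having been started with
PetscLogDefaultBegin or -log_view, and each PetscLogView dump contains
everything collected so far, not only the current stage).

  #include <petscksp.h>

  int main(int argc, char **argv)
  {
    PetscInt i;

    PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
    PetscCall(PetscLogDefaultBegin());  /* start collecting events (or run with -log_view) */

    for (i = 0; i < 3; i++) {
      PetscLogStage stage;
      PetscViewer   viewer;
      char          name[64];

      PetscCall(PetscSNPrintf(name, sizeof(name), "rhs_%d", (int)i));
      PetscCall(PetscLogStageRegister(name, &stage));
      PetscCall(PetscLogStagePush(stage));
      /* ... build the i-th RHS and call KSPSolve here ... */
      PetscCall(PetscLogStagePop());

      /* Dump the log collected so far into a file named after the iteration */
      PetscCall(PetscSNPrintf(name, sizeof(name), "log_rhs_%d.txt", (int)i));
      PetscCall(PetscViewerASCIIOpen(PETSC_COMM_WORLD, name, &viewer));
      PetscCall(PetscLogView(viewer));
      PetscCall(PetscViewerDestroy(&viewer));
    }

    PetscCall(PetscFinalize());
    return 0;
  }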
Dear PETSc users,

I would like to show you a performance issue when a Cholesky factor is
re-used as a direct solver or preconditioner many times with many
right-hand side vectors. Can anyone suggest a solution to this issue?
In advance, thanks for your help.

Regards,
Evan

Example 1: I used MUMPS
I will use thousands of them; moreover, it will require a lot of memory to
store them as a dense matrix.

Surely I don't expect my solution to be sparse, and it is not, but at least
in electromagnetics it is pretty common to have a sparse RHS. I am wondering
how it is in other disciplines?

I don't know about other solvers, but when I use multiple RHS in MUMPS it
works several times faster than solving for them sequentially. That is just
my experience. And I can also imagine that using sparsity is another obvious
way to solve for many RHS very quickly.
On 11.12.2011 18:50, Barry Smith wrote:
> On Dec 11, 2011, at 9:29 AM, Alexander Grayver wrote:
>> My first question is whether you use multiple RHS internally or you
>> solve one-by-one?
> One by one

I'm wondering why? All main direct packages like MUMPS, SuperLU_DIST,
PaSTiX support multiple RHS.

> We do not handle a sparse right hand side.

Since I already transfer
Hello,

I used to use MUMPS directly with a sparse multiple RHS. Now I use MUMPS
through the PETSc interface, and the solution for multiple RHS takes 1.5-2
times longer (MatMatSolve).
My first question is whether you use multiple RHS internally or you
solve one-by-one?
My second question concerns the option
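For reference, a minimal sketch of solving a block of right-hand sides with
one MatMatSolve call through the PETSc interface (the tridiagonal matrix,
the random dense RHS block, and a MUMPS-enabled build are assumptions for
illustration). Note the right-hand-side block must be a dense matrix; as
discussed above, a sparse right-hand side is not handled.

  #include <petscmat.h>

  int main(int argc, char **argv)
  {
    Mat           A, F, B, X;
    IS            isrow, iscol;
    MatFactorInfo info;
    PetscInt      n = 50, nrhs = 8, i, Istart, Iend;

    PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));

    /* Sparse system matrix: a small tridiagonal stand-in */
    PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
    PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n));
    PetscCall(MatSetFromOptions(A));
    PetscCall(MatSetUp(A));
    PetscCall(MatGetOwnershipRange(A, &Istart, &Iend));
    for (i = Istart; i < Iend; i++) {
      if (i > 0)     PetscCall(MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES));
      if (i < n - 1) PetscCall(MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES));
      PetscCall(MatSetValue(A, i, i, 2.0, INSERT_VALUES));
    }
    PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
    PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));

    /* Dense blocks holding the nrhs right-hand sides and the solutions */
    PetscCall(MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, n, nrhs, NULL, &B));
    PetscCall(MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, n, nrhs, NULL, &X));
    PetscCall(MatSetRandom(B, NULL));

    /* Factor once through MUMPS */
    PetscCall(MatGetFactor(A, MATSOLVERMUMPS, MAT_FACTOR_LU, &F));
    PetscCall(MatGetOrdering(A, MATORDERINGND, &isrow, &iscol));
    PetscCall(MatFactorInfoInitialize(&info));
    PetscCall(MatLUFactorSymbolic(F, A, isrow, iscol, &info));
    PetscCall(MatLUFactorNumeric(F, A, &info));

    /* One call solves for every column of B */
    PetscCall(MatMatSolve(F, B, X));

    PetscCall(ISDestroy(&isrow));
    PetscCall(ISDestroy(&iscol));
    PetscCall(MatDestroy(&F));
    PetscCall(MatDestroy(&B));
    PetscCall(MatDestroy(&X));
    PetscCall(MatDestroy(&A));
    PetscCall(PetscFinalize());
    return 0;
  }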
Hello,

First of all, I'm new to PETSc so please be patient with me.
I'm trying to solve 2 linear systems with the same A matrix using
superlu_dist, and so I'm using the same KSP context for both systems.
The matrix is a square matrix of order 84719 with 351289 nonzero elements.
The time for the
Hi, PETSc developers,

I am wondering what the difference is when iterative preconditioners (such
as ILU, sparse approximate inverse, and so on) are used with a single RHS
versus multiple RHS using KSPSolve().
In the multiple-RHS case, is the preconditioner rebuilt for each RHS? Thanks
a lot.

Regards,
Yujie
> Very Many Thanks for your efforts on this Barry. The PLAPACK
> website looks like it hasn't been updated since 2007. Maybe PLAPACK
> is in need of some maintenance? You said "nonsquare", is plapack working for
> you for square matrices ?

Yes, it works for square matrices.
See ~petsc/src/mat/examples/tests/ex103.c and ex107.c

Hong
Very Many Thanks for your efforts on this Barry. The PLAPACK
website looks like it hasn't been updated since 2007. Maybe PLAPACK
is in need of some maintenance? You said "nonsquare",
is plapack working for you for square matrices ?
thanks again,
df
On Mar 12, 2009, at 8:11 PM, David Fuentes wrote:
>
> I'm getting plapack errors in "external library" with
> MatMatMult_MPIDense_MPIDense with plapack? How is memory handled for a
> matrix of type MATMPIDENSE? Are all NxN entries allocated and ready for
> use at time of creation?

Yes

> What solver would I use to do a factorization of a dense parallel matrix
> w/ plapack? I don't see a MPI_SOLVER_PLAPACK ?

MAT_SOLVER_PLAPACK.
See ~petsc-3.0.0/src/mat/examples/tests/ex103.c

Hong
I'm getting plapack errors in "external library" with
MatMatMult_MPIDense_MPIDense with plapack. How is memory handled for a
matrix of type MATMPIDENSE? Are all NxN entries allocated and ready for
use at time of creation, or do I have to MatSetValues and then assemble
before the matrix is ready to use?
Hi Hong,

What solver would I use to do a factorization of a dense parallel matrix
w/ plapack? I don't see a MPI_SOLVER_PLAPACK?
>> Is MatCreateMPIDense the recommended matrix type to interface w/ mumps?
>> Does it use sparse direct storage or allocate the full n x n matrix?
>
> No, MUMPS is "sparse direct" so it uses MPIAIJ.

For an MPI dense matrix, you can use plapack.

Hong
On Thu, Mar 12, 2009 at 1:50 PM, David Fuentes wrote:
> Thanks Matt,
>
> Is MatCreateMPIDense the recommended matrix type to interface w/ mumps?
> Does it use sparse direct storage or allocate the full n x n matrix?

No, MUMPS is "sparse direct" so it uses MPIAIJ.

  Matt
Thanks Matt,
Is MatCreateMPIDense the recommended matrix type to interface w/ mumps ?
Does it use a sparse direct storage or allocate the full n x n matrix?
df
On Thu, 12 Mar 2009, Matthew Knepley wrote:
> You can try using a sparse direct solver like MUMPS instead of PETSc LU.
>
> Matt
You can try using a sparse direct solver like MUMPS instead of PETSc LU.
Matt
Thanks Hong,

The complete error message is attached. I think I just had too big of a
matrix. The matrix I'm trying to factor is 327680 x 327680.

[0]PETSC ERROR: --------------------- Error Message ---------------------
[0]PETSC ERROR: Out of memory. This could be due to allocating
David,

I do not see any problem with the calling sequence.
The memory is determined in MatLUFactorSymbolic().
Does your code crash within MatLUFactorSymbolic()?
Please send us the complete error message.

Hong
Hello,

I have a sparse matrix, A, with which I want to solve multiple right-hand
sides with a direct solver. Is this the correct call sequence?

MatGetFactor(A,MAT_SOLVER_PETSC,MAT_FACTOR_LU,&Afact);
IS isrow,iscol;
MatGetOrdering(A,MATORDERING_ND,&isrow,&iscol);
MatLUFactorSymbolic(Afact,A,isrow,iscol,&info);
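The snippet above is cut off in the archive, so here is one self-contained
sketch of the whole sequence it starts, under the assumption that the
remaining calls follow the standard symbolic/numeric-factorization-then-
MatSolve pattern with PETSc's own (sequential) LU. In recent PETSc releases
the constants are spelled MATSOLVERPETSC and MATORDERINGND, and the small
tridiagonal matrix below is only a stand-in for the real A.

  #include <petscmat.h>

  int main(int argc, char **argv)
  {
    Mat           A, Afact;
    Vec           b, x;
    IS            isrow, iscol;
    MatFactorInfo info;
    PetscInt      n = 20, i;

    PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));

    /* Small sequential tridiagonal stand-in for the sparse matrix A */
    PetscCall(MatCreate(PETSC_COMM_SELF, &A));
    PetscCall(MatSetSizes(A, n, n, n, n));
    PetscCall(MatSetFromOptions(A));
    PetscCall(MatSetUp(A));
    for (i = 0; i < n; i++) {
      if (i > 0)     PetscCall(MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES));
      if (i < n - 1) PetscCall(MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES));
      PetscCall(MatSetValue(A, i, i, 2.0, INSERT_VALUES));
    }
    PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
    PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));
    PetscCall(MatCreateVecs(A, &x, &b));

    /* Symbolic and numeric LU factorization, done once */
    PetscCall(MatGetFactor(A, MATSOLVERPETSC, MAT_FACTOR_LU, &Afact));
    PetscCall(MatGetOrdering(A, MATORDERINGND, &isrow, &iscol));
    PetscCall(MatFactorInfoInitialize(&info));
    PetscCall(MatLUFactorSymbolic(Afact, A, isrow, iscol, &info));
    PetscCall(MatLUFactorNumeric(Afact, A, &info));

    /* Each additional right-hand side costs only the triangular solves */
    for (i = 0; i < 4; i++) {
      PetscCall(VecSet(b, (PetscScalar)(i + 1)));
      PetscCall(MatSolve(Afact, b, x));
    }

    PetscCall(ISDestroy(&isrow));
    PetscCall(ISDestroy(&iscol));
    PetscCall(MatDestroy(&Afact));
    PetscCall(VecDestroy(&b));
    PetscCall(VecDestroy(&x));
    PetscCall(MatDestroy(&A));
    PetscCall(PetscFinalize());
    return 0;
  }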