Thanks for the help.

I didn't want to get too far into the weeds on the numerical method; the short version is that I have a block tridiagonal system that needs to be solved.  If it helps, the system comes from an ADI scheme applied to the Navier-Stokes equations.  The [A], [B], and [C] block matrices are the coefficients of the [Q]_{i-1}, [Q]_i, and [Q]_{i+1} vector unknowns (density, momentum, energy, species, etc.) for each I, J, K sweep through the solution grid.  So, technically, I do need to solve batches of tridiagonal problems.  I'll take a look at your solvers, as they seem lighter weight than PETSc.
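
For concreteness, the per-line elimination I have in mind is the standard block Thomas recurrence, written here in the same [A], [B], [C], [Q] notation (a generic sketch, assuming the pivot blocks stay invertible):

   A_i Q_{i-1} + B_i Q_i + C_i Q_{i+1} = b_i, \quad i = 1,\dots,n
   \quad (A_1 = C_n = 0)

   Forward sweep:
   \tilde{B}_1 = B_1, \qquad \tilde{b}_1 = b_1
   \tilde{B}_i = B_i - A_i \tilde{B}_{i-1}^{-1} C_{i-1}, \qquad
   \tilde{b}_i = b_i - A_i \tilde{B}_{i-1}^{-1} \tilde{b}_{i-1}

   Back substitution:
   Q_n = \tilde{B}_n^{-1} \tilde{b}_n, \qquad
   Q_i = \tilde{B}_i^{-1} ( \tilde{b}_i - C_i Q_{i+1} )

Each application of \tilde{B}_i^{-1} is a small dense block solve, which is part of why batching the sweeps is attractive.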

Thanks,

John

On 9/6/2019 5:32 PM, Jed Brown wrote:
Where do your tridiagonal systems come from?  Do you need to solve one
at a time, or batches of tridiagonal problems?

Although it is not in PETSc, we have some work on the sort of
tridiagonal systems that arise in compact discretizations; it turns out
they can be solved much faster than generic tridiagonal problems.

   https://tridiaglu.github.io/index.html

"John L. Papp via petsc-users" <petsc-users@mcs.anl.gov> writes:

Hello,

I need a parallel block tridiagonal solver and thought PETSc would be
perfect.  However, there seems to be no specific example showing exactly
which VecCreate and MatCreate functions to use.  I searched the archive
and the web and found no explicit block tridiagonal examples (although
the ex23.c example solves a tridiagonal matrix), and the manual is vague
on the subject.  So, a couple of questions (a sketch of the two creation
options follows the list):

  1. Is it better to create a monolithic matrix (MatCreateAIJ) and vector
     (VecCreate)?
  2. Or is it better to create a block matrix (MatCreateBAIJ) and a block
     vector (VecCreate followed by VecSetBlockSize, or is there an
     equivalent block vector create)?
  3. Which parallel solver(s) work best for Dx=b when D is a block
     tridiagonal matrix?
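
To make the options concrete, here is my rough sketch of what I think the
two creation paths look like.  The sizes are hypothetical (n = 100 local
block rows, bs = 5 unknowns per grid point), and the preallocation counts
are guesses for tridiagonal coupling under a 1D partition:

   /* Rough sketch, not working code: only one of Daij/Dbaij would
      actually be created. */
   Mat      Daij, Dbaij;
   Vec      x, b;
   PetscInt n = 100, bs = 5;

   /* Option 1: monolithic AIJ.  Each scalar row has at most 3*bs
      nonzeros; the diagonal/off-diagonal split below is a guess. */
   MatCreateAIJ(PETSC_COMM_WORLD, n*bs, n*bs, PETSC_DETERMINE,
                PETSC_DETERMINE, 3*bs, NULL, bs, NULL, &Daij);

   /* Option 2: BAIJ with an explicit block size.  Each block row has at
      most 3 blocks; at most 1 couples to a neighboring process. */
   MatCreateBAIJ(PETSC_COMM_WORLD, bs, n*bs, n*bs, PETSC_DETERMINE,
                 PETSC_DETERMINE, 3, NULL, 1, NULL, &Dbaij);

   /* As far as I can tell there is no separate block-vector create; the
      usual approach seems to be a plain Vec with a block size set. */
   VecCreate(PETSC_COMM_WORLD, &x);
   VecSetSizes(x, n*bs, PETSC_DETERMINE);
   VecSetBlockSize(x, bs);
   VecSetFromOptions(x);
   VecDuplicate(x, &b);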

If this helps, each block row will be owned by a single process.  In
other words, the data used to fill the [A], [B], and [C] block matrices
in a given row of the block tridiagonal matrix D will reside on the same
process.  Hence, I don't need to store the individual [A], [B], and [C]
block matrices in parallel, just the overall block tridiagonal matrix on
a row-by-row basis.
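
Along those lines, a rough sketch of how I imagine filling one interior
block row i with MatSetValuesBlocked and then solving; D stands for
whichever Mat was created above, and vals is assumed to already hold the
bs x (3*bs) dense block row [A_i | B_i | C_i]:

   /* For each local block row i owned by this rank (interior row shown;
      the first and last rows would drop the missing neighbor block).
      Indices are global block, not scalar, indices. */
   PetscInt    row     = i;
   PetscInt    cols[3] = {i-1, i, i+1};
   PetscScalar vals[5*3*5];            /* bs x (3*bs), row-major by default */
   /* ... pack [A_i | B_i | C_i] into vals ... */
   MatSetValuesBlocked(D, 1, &row, 3, cols, vals, INSERT_VALUES);

   MatAssemblyBegin(D, MAT_FINAL_ASSEMBLY);
   MatAssemblyEnd(D, MAT_FINAL_ASSEMBLY);

   KSP ksp;
   KSPCreate(PETSC_COMM_WORLD, &ksp);
   KSPSetOperators(ksp, D, D);
   KSPSetFromOptions(ksp);             /* e.g. -ksp_type gmres -pc_type bjacobi */
   KSPSolve(ksp, b, x);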

Thanks in advance,

John

--
**************************************************************
Dr. John Papp
Senior Research Scientist
CRAFT Tech.
6210 Kellers Church Road
Pipersville, PA 18947

Email:  jp...@craft-tech.com
Phone:  (215) 766-1520
Fax  :  (215) 766-1524
Web  :  http://www.craft-tech.com

**************************************************************
