> On Mar 11, 2019, at 11:11 PM, Maahi Talukder wrote:
>
> Hi
> Thank you so much for the explanation.
>
> So when you say connecting two points on the grid in the matrix entry, what
> do you mean by that? Do you mean that the matrix entry is calculated using
> two points ( (i,j) and (i',j') ) on th
> On Mar 11, 2019, at 10:01 PM, Maahi Talukder wrote:
>
> Hi
> Thank you for your explanation.
> So is it so that it always connects two points?
Entries in matrices represent the connection between points in a vector
(including the diagonal entries, which are connections between a point and itself).
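As a rough illustration of this idea (a sketch only, not from the thread; the 5-point stencil, grid sizes, and variable names are assumed), each matrix row below corresponds to one grid point, its diagonal entry is the point's connection to itself, and the off-diagonal entries are its connections to neighbouring points:

/* Sketch: assumed 5-point Laplacian on an nx x ny structured grid. */
#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat            A;
  PetscInt       nx = 10, ny = 10, i, j, row, ncols, cols[5];
  PetscScalar    vals[5];
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, nx*ny, nx*ny);CHKERRQ(ierr);
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);
  ierr = MatSetUp(A);CHKERRQ(ierr);

  for (j = 0; j < ny; j++) {
    for (i = 0; i < nx; i++) {
      row   = j*nx + i;                              /* global index of point (i,j)    */
      ncols = 0;
      cols[ncols] = row;      vals[ncols++] =  4.0;  /* connection of (i,j) to itself  */
      if (i > 0)    { cols[ncols] = row - 1;  vals[ncols++] = -1.0; }  /* (i-1,j) */
      if (i < nx-1) { cols[ncols] = row + 1;  vals[ncols++] = -1.0; }  /* (i+1,j) */
      if (j > 0)    { cols[ncols] = row - nx; vals[ncols++] = -1.0; }  /* (i,j-1) */
      if (j < ny-1) { cols[ncols] = row + nx; vals[ncols++] = -1.0; }  /* (i,j+1) */
      ierr = MatSetValues(A, 1, &row, ncols, cols, vals, INSERT_VALUES);CHKERRQ(ierr);
    }
  }
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}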
> On Mar 11, 2019, at 7:07 PM, Maahi Talukder via petsc-users
> wrote:
>
>
> Thank you for your reply.
>
> I still have some confusion. So if (i,j) is a point on the structured grid
> (where "i" is the column and "j" is the row), and the information associated
> with the (i,j) point on th
Seems like a problem with the MPI install. You could try compiling and running a
simple MPI (only) code to see if MPI_Init() and then MPI_Finalize() succeed or not.
Barry
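A minimal MPI-only test along these lines might look like the following (a sketch, not part of the original message; compile it with the same MPI wrapper used to build PETSc, e.g. mpicc, and run it with mpiexec):

/* Sketch: if this fails, the problem is in the MPI installation, not PETSc. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
  int rank, size;
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);
  printf("Hello from rank %d of %d\n", rank, size);
  MPI_Finalize();
  return 0;
}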
> On Mar 11, 2019, at 4:47 PM, Edoardo alinovi
> wrote:
>
> Thanks Barry for the help as usual.
>
> Attached the errors. M
Not good. Run the test manually, then in the debugger to see exactly where
it is crashing.
cd src/snes/examples/tutorials
make ex19
./ex19
gdb (or the intel debugger) ./ex19
run
Send all the output
> On Mar 11, 2019, at 4:13 PM, Edoardo alinovi via petsc-users
Yuyun,
DMDA is an add-on on top of Vec/Mat (it doesn't replace anything in them).
DMDA manages the parallel layout of your structured grid across the
processes so you don't have to manage that yourself. So, for structured grids,
using DMDA is actually easier than you having to manage
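For illustration only (a sketch with assumed grid size, boundary types, and stencil settings, not code from this thread), creating a 2D DMDA and letting it hand back a vector and matrix with the matching parallel layout looks roughly like this:

/* Sketch: DMDA manages the parallel layout of a structured 2D grid and
   provides vectors/matrices laid out to match it. */
#include <petscdmda.h>

int main(int argc, char **argv)
{
  DM             da;
  Vec            x;
  Mat            A;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ierr = DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                      DMDA_STENCIL_STAR, 64, 64, PETSC_DECIDE, PETSC_DECIDE,
                      1, 1, NULL, NULL, &da);CHKERRQ(ierr);
  ierr = DMSetFromOptions(da);CHKERRQ(ierr);
  ierr = DMSetUp(da);CHKERRQ(ierr);
  ierr = DMCreateGlobalVector(da, &x);CHKERRQ(ierr);  /* vector with the DMDA's layout  */
  ierr = DMCreateMatrix(da, &A);CHKERRQ(ierr);        /* matrix with the matching layout */
  /* ... fill A and x, e.g. with MatSetValuesStencil() using (i,j) indices ... */
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = DMDestroy(&da);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}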
Hello team,
May I know for what types of computations DMDA is better to use compared to
regular Vec/Mat? It is more complicated in terms of usage, so thus far I've
only used Vec/Mat. Would DMDA improve the performance of solving large linear
systems (say for variable grid spacing as a result of
> On Mar 11, 2019, at 9:42 AM, Pietro Benedusi via petsc-users
> wrote:
>
> Dear Petsc team,
>
> I have a question about the setting up of a multigrid solver.
>
> I would like to use a PCG smoother, preconditioned with a mass matrix, just
> on the fine level.
> But when I add the line for pre
You are giving all levels the same matrices (K & M). This code should not
work.
You are using LU as the smoother. This will solve the problem immediately.
If MG is set up correctly then you will just have zero residuals and
corrections for the rest of the solve. And you set the relative tolerance
to
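A rough sketch of the setup being described (not the poster's code; nlevels, the per-level stiffness matrices K[l], the interpolations P[l], and the fine-grid mass matrix M_fine are all assumed to exist) would give each level its own operator and only make the finest-level smoother CG preconditioned with the mass matrix:

/* Sketch: per-level operators plus a CG smoother with mass-matrix
   preconditioner on the finest level only. */
KSP            ksp, smoother;
PC             pc;
PetscInt       l;
PetscErrorCode ierr;

ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
ierr = KSPSetOperators(ksp, K[nlevels-1], K[nlevels-1]);CHKERRQ(ierr);
ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
ierr = PCSetType(pc, PCMG);CHKERRQ(ierr);
ierr = PCMGSetLevels(pc, nlevels, NULL);CHKERRQ(ierr);

ierr = PCMGGetCoarseSolve(pc, &smoother);CHKERRQ(ierr);           /* level 0 */
ierr = KSPSetOperators(smoother, K[0], K[0]);CHKERRQ(ierr);
for (l = 1; l < nlevels; l++) {
  ierr = PCMGSetInterpolation(pc, l, P[l]);CHKERRQ(ierr);         /* level l-1 -> l  */
  ierr = PCMGGetSmoother(pc, l, &smoother);CHKERRQ(ierr);
  ierr = KSPSetOperators(smoother, K[l], K[l]);CHKERRQ(ierr);     /* this level's K  */
}

/* Finest level only: CG smoother whose preconditioner is built from M_fine */
ierr = PCMGGetSmoother(pc, nlevels-1, &smoother);CHKERRQ(ierr);
ierr = KSPSetType(smoother, KSPCG);CHKERRQ(ierr);
ierr = KSPSetOperators(smoother, K[nlevels-1], M_fine);CHKERRQ(ierr);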
Dear Petsc team,
I have a question about the setting up of a multigrid solver.
I would like to use a PCG smoother, preconditioned with a mass matrix, just on
the fine level.
But when I add the line for preconditioning the CG with the mass matrix, my MG
diverges.
I have implemented the same solver
There is a small difference in memory usage already (of 135 MB). It is
not a big deal, but it will be for larger problems (as shown by the
memory scaling). If we find the origin of this small gap for a small
case, we will probably find the reason why the memory scaling is so bad with
3.10.
I am currently
Is there a difference in memory usage on your tiny problem? I assume no.
I don't see anything that could come from GAMG other than the RAP stuff
that you have discussed already.
On Mon, Mar 11, 2019 at 9:32 AM Myriam Peyrounette <
myriam.peyroune...@idris.fr> wrote:
> The code I am using here is
The PETSc logs print the max time and the ratio max/min.
On Mon, Mar 11, 2019 at 8:24 AM Ale Foggia via petsc-users <
petsc-users@mcs.anl.gov> wrote:
> Hello all,
>
> Thanks for your answers.
>
> 1) I'm working with a matrix with a linear size of 2**34, but it's a
> sparse matrix, and the number
The code I am using here is example 42 of PETSc
(https://www.mcs.anl.gov/petsc/petsc-3.9/src/ksp/ksp/examples/tutorials/ex42.c.html).
Indeed it solves the Stokes equation. I thought it was a good idea to
use an example you might know (and didn't find any that uses GAMG
functions). I just change
Dear Matt,
I see now that you are right. I replaced sizeof(values) with ncols, and it
gives the matrix correctly.
However, now I get an error in EPSGetEigenpair:
[0]PETSC ERROR: --------------------- Error Message --------------------------
[0]PETSC ERROR: Argument o
Dear Matt,
I printed it in the wrong state; ncols gives the right value.
But I still can't understand the first problem.
Eda
Eda Oktay, on Mon, 11 Mar 2019 at 16:05, wrote:
> Dear Matt,
>
> Thank you for answering. First of all, sizeof(vals) returns the number of
> entries, I checked. Secondly
Dear Matt,
Thank you for answering. First of all, sizeof(vals) returns the number of
entries, I checked. Secondly, I found a problem:
ncols gives me 6.95328e-310. However, I checked the matrix L, and it was
computed properly.
Why would ncols give such a value?
Thanks,
Eda
Matthew Knepley , 11 Mar 20
On Mon, Mar 11, 2019 at 8:27 AM Eda Oktay via petsc-users <
petsc-users@mcs.anl.gov> wrote:
> Hello,
>
> I have the following part of a code which tries to change the nonzero values
> of matrix L to -1. However, in the MatSetValues line, something happens and
> some of the values in the matrix turn into 1
In looking at this larger scale run ...
* Your eigen estimates are much lower than your tiny test problem. But
this is Stokes apparently and it should not work anyway. Maybe you have a
small time step that adds a lot of mass that brings the eigen estimates
down. And your min eigenvalue (not used)
Hello,
I have the following part of a code which tries to change the nonzero values
of matrix L to -1. However, in the MatSetValues line, something happens and
some of the values in the matrix turn into 1.99665e-314 instead of -1. The type
of arr is defined as PetscScalar and arr is produced correctly. What c
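For reference, a loop that does this kind of replacement could look like the sketch below (not the poster's code; the matrix name L is taken from the message, everything else is assumed). The key point raised later in the thread is that the column count must come from MatGetRow's ncols, not from sizeof(vals), which is only the byte size of a pointer:

/* Sketch: overwrite every existing nonzero of L with -1.0 on this
   process's rows, using ncols from MatGetRow. */
PetscInt        row, rstart, rend, ncols, nc, k, *colcopy;
const PetscInt *cols;
PetscScalar    *minus_ones;
PetscErrorCode  ierr;

ierr = MatGetOwnershipRange(L, &rstart, &rend);CHKERRQ(ierr);
for (row = rstart; row < rend; row++) {
  ierr = MatGetRow(L, row, &ncols, &cols, NULL);CHKERRQ(ierr);
  nc   = ncols;                                   /* number of nonzeros in this row */
  ierr = PetscMalloc2(nc, &colcopy, nc, &minus_ones);CHKERRQ(ierr);
  for (k = 0; k < nc; k++) { colcopy[k] = cols[k]; minus_ones[k] = -1.0; }
  ierr = MatRestoreRow(L, row, &ncols, &cols, NULL);CHKERRQ(ierr);
  ierr = MatSetValues(L, 1, &row, nc, colcopy, minus_ones, INSERT_VALUES);CHKERRQ(ierr);
  ierr = PetscFree2(colcopy, minus_ones);CHKERRQ(ierr);
}
ierr = MatAssemblyBegin(L, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
ierr = MatAssemblyEnd(L, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);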
GAMG looks fine here but the convergence rate looks terrible, like 4k+
iterations. You have 4 degrees of freedom per vertex. What equations and
discretization are you using?
Your eigen estimates are a little high, but not crazy. I assume this system
is not symmetric.
AMG is oriented toward the lap
Hello,
I need to solve a 2*2 block linear system. The matrices A_00, A_01,
A_10, A_11 are constructed separately via MatCreateSeqAIJWithArrays and
MatCreateSeqSBAIJWithArrays. Then, I construct the full system matrix
with MatCreateNest, and use MatNestGetISs and PCFieldSplitSetIS to set
up th
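A rough sketch of that setup (assumptions only: the blocks A00, A01, A10, A11 are taken to exist already as sequential matrices, and split names "0"/"1" are illustrative) might look like this:

/* Sketch: build the 2x2 nested matrix from existing blocks and hand its
   index sets to PCFIELDSPLIT. */
Mat            subs[4] = {A00, A01, A10, A11};
Mat            A;
IS             isg[2];
KSP            ksp;
PC             pc;
PetscErrorCode ierr;

ierr = MatCreateNest(PETSC_COMM_SELF, 2, NULL, 2, NULL, subs, &A);CHKERRQ(ierr);
ierr = MatNestGetISs(A, isg, NULL);CHKERRQ(ierr);   /* row index sets of the two blocks */

ierr = KSPCreate(PETSC_COMM_SELF, &ksp);CHKERRQ(ierr);
ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
ierr = PCSetType(pc, PCFIELDSPLIT);CHKERRQ(ierr);
ierr = PCFieldSplitSetIS(pc, "0", isg[0]);CHKERRQ(ierr);
ierr = PCFieldSplitSetIS(pc, "1", isg[1]);CHKERRQ(ierr);
ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);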
Hi,
Good point. I changed the 3.10 version so that it is configured with
--with-debugging=0. You'll find attached the output of the new LogView.
The execution time is reduced (although still not as good as 3.6) but I
can't see any improvement with regard to memory.
You'll also find attached the g