Hi,
I'm running a CFD code which solves the momentum and Poisson equations. Due to poor scaling with HYPRE at higher CPU counts, I decided to try PETSc with boomeramg and gamg. I tested some small cases and it works well. However, for the large problem which has poor scaling, it gives an error ...
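For reference, switching between the two preconditioners is normally done from the options database; a minimal sketch, assuming the Poisson solve goes through a standard KSP (./mycfd and the CG choice are placeholders, not the poster's actual setup):

  mpirun -n 4 ./mycfd -ksp_type cg -pc_type hypre -pc_hypre_type boomeramg   # hypre BoomerAMG
  mpirun -n 4 ./mycfd -ksp_type cg -pc_type gamg                             # PETSc native AMG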
On Thu, Mar 15, 2018 at 8:18 AM, Manuel Valera wrote:
Ok so, I went back and erased the old libpetsc.so.3; I think it was the one causing problems. I had --with-shared-libraries=0 and the installation complained of not having that file, then I reinstalled with --with-shared-libraries=1 and it is finally recognizing my system installation with only CUDA.
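For the record, a reconfigure along those lines would presumably look something like the sketch below (the exact option set from the poster's installation thread is not shown in this digest, so treat these flags as assumptions):

  ./configure --with-shared-libraries=1 --with-cuda=1
  make all
  make test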
Hi everybody,
I am trying to follow the advice for output given in the recent thread on this list:
https://lists.mcs.anl.gov/pipermail/petsc-users/2018-February/034546.html
At the end of each timestep in my code I do
{
PetscViewer outputToFile;
char filename[50];
sprintf(filename, ...
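The snippet above is cut off by the archive; a minimal sketch of how that per-timestep pattern usually continues (the format string, the timestep counter, and the vector u are assumptions, not the poster's actual code; error checking omitted):

  sprintf(filename, "out-%d.bin", timestep);             /* hypothetical naming scheme */
  PetscViewerBinaryOpen(PETSC_COMM_WORLD, filename,
                        FILE_MODE_WRITE, &outputToFile); /* one binary file per step */
  VecView(u, outputToFile);                              /* write the solution vector */
  PetscViewerDestroy(&outputToFile);                     /* flush and close the file */
}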
On Thu, Mar 15, 2018 at 4:01 AM, Manuel Valera wrote:
Ok well, it turns out the $PETSC_DIR points to the testpetsc directory, and it makes, installs, and tests without problems (only a problem on ex5f), but trying to reconfigure in the valera/petsc directory asks me to change the $PETSC_DIR variable.
Meanwhile the system installation still points to the val...
On Thu, Mar 15, 2018 at 3:25 AM, Manuel Valera wrote:
Yeah, that worked:
[valera@node50 tutorials]$ ./ex19 -dm_vec_type seqcuda -dm_mat_type seqaijcusparse
lid velocity = 0.0625, prandtl # = 1., grashof # = 1.
Number of SNES iterations = 2
[valera@node50 tutorials]$
How do I make sure the other program refers to this installation? Using the same arguments ...
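Presumably the same runtime options carry over to any executable built against this installation; a sketch using the standalone solver that appears later in this thread (binary name and flags are taken from the later messages, so this is an illustration, not a verified command):

  mpirun -n 1 ./linsolve -vec_type seqcuda -mat_type seqaijcusparse

Note this only affects objects that call VecSetFromOptions()/MatSetFromOptions(); see Matt's remark further down.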
On Thu, Mar 15, 2018 at 3:19 AM, Manuel Valera wrote:
Yes, this is the system installation that is being correctly linked (the linear solver and the model are not linking the correct installation, I don't know why yet). I configured with only CUDA this time because of the message Karl Rupp posted on my installation thread, where he says only one type of library will ...
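A quick, generic way to check which PETSc build a binary actually loads (plain Linux tooling, nothing PETSc-specific; ./linsolve stands in for whichever executable is linking the wrong installation):

  ldd ./linsolve | grep -i petsc    # path of the libpetsc.so actually resolved
  echo $PETSC_DIR $PETSC_ARCH       # compare with the environment used at build time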
On Thu, Mar 15, 2018 at 3:12 AM, Manuel Valera wrote:
> Thanks, got this error:
Did you not configure with CUSP? It looks like you have CUDA, so use -dm_vec_type seqcuda.
Thanks,
Matt
Thanks, got this error:
[valera@node50 testpetsc]$ cd src/snes/examples/tutorials/
[valera@node50 tutorials]$ PETSC_ARCH="" make ex19
/usr/lib64/openmpi/bin/mpicc -o ex19.o -c -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -O2 -I/home/valera ...
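For a user program (as opposed to the in-tree examples), the usual way to compile and link against the intended installation is to include PETSc's makefile fragments; a minimal sketch with a placeholder target name:

  include ${PETSC_DIR}/lib/petsc/conf/variables
  include ${PETSC_DIR}/lib/petsc/conf/rules

  myapp: myapp.o
          -${CLINKER} -o myapp myapp.o ${PETSC_LIB}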
On Thu, Mar 15, 2018 at 2:46 AM, Manuel Valera wrote:
Ok, let's try that. If I go to /home/valera/testpetsc/arch-linux2-c-opt/tests/src/snes/examples/tutorials there is runex19.sh and a lot of other ex19 variants, but if I run that I get:
[valera@node50 tutorials]$ ./runex19.sh
not ok snes_tutorials-ex19_1
# ---
On Thu, Mar 15, 2018 at 2:27 AM, Manuel Valera wrote:
> Ok thanks Matt, I made a smaller case with only the linear solver and a 25x25 matrix, the error I have in this case is:
Ah, it appears that not all parts of your problem are taking the type options. If you want the linear algebra objects to pick up those types, they have to call SetFromOptions ...
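A minimal sketch of what taking the type options requires on the user side (names are placeholders; error checking omitted):

  Mat A;
  Vec x;
  MatCreate(PETSC_COMM_WORLD, &A);
  MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
  MatSetFromOptions(A);   /* honors -mat_type aijcusparse */
  MatSetUp(A);
  VecCreate(PETSC_COMM_WORLD, &x);
  VecSetSizes(x, PETSC_DECIDE, n);
  VecSetFromOptions(x);   /* honors -vec_type cusp / -vec_type cuda */

Objects created with a hard-coded type (e.g. MatCreateSeqAIJ) will ignore the command-line type options.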
Ok thanks Matt, I made a smaller case with only the linear solver and a 25x25 matrix; the error I have in this case is:
[valera@node50 alone]$ mpirun -n 1 ./linsolve -vec_type cusp -mat_type aijcusparse
laplacian.petsc !
TrivSoln loaded, size: 125 / 125
RHS loaded, size: ...
Fande,
thank you for your answer. Yes, of course you are totally right and I could not agree more. So far we are fine with the accuracy/variance of the results. My main intention is to figure out whether that is really supposed to be like that, due to, for example, dynamic load balancing in the method, or if ...
Matt can likely answer this, but it is always better to try to figure things out yourself. Everything in PETSc is knowable to the user if you know the correct places to look.
I would run in the debugger and put a breakpoint in KSPSolve(), then do a bt (or where) at each call to KSPSolve.
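A sketch of such a session with gdb (./myapp is a placeholder; PETSc can also attach a debugger itself via the -start_in_debugger option):

  $ gdb ./myapp
  (gdb) break KSPSolve
  (gdb) run
  (gdb) bt          # backtrace at the first KSPSolve
  (gdb) continue    # run to the next KSPSolve, then bt again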
We had a similar problem before with superlu_dist, but it happened only when the number of processor cores was larger than 2. Direct solvers, in our experience, often involve more messages (especially non-blocking communication). This then causes different operation orders, and hence different results ...
On Wed, Mar 14, 2018 at 11:21 PM, Sonia Pozzi wrote:
Dear Barry,
thank you for the answer. That helped a lot. Just a second curiosity: I’m setting A00 to be solved with preonly+lu. I obtain the following:
ksp_0 KSPGetTotalIterations: 26
ksp_1 KSPGetTotalIterations: 22
Residual ksp_0: 0 Reason ksp_0: 4
Solution ksp_0: Convergence in 1 iterations.
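For reference, the preonly+lu setting on A00 described above is typically expressed with options along these lines (a sketch; the actual prefixes depend on how the splits are named in the code):

  -pc_type fieldsplit -pc_fieldsplit_type schur
  -fieldsplit_0_ksp_type preonly -fieldsplit_0_pc_type lu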
On Mar 14, 2018, at 8:47 AM, Sonia Pozzi wrote:
Dear PETSc Developers and Users,
I’m working with the PCFieldSplit preconditioner (Schur complement based approach). To count the number of iterations I’m taking the info from subksp_0 and subksp_1. I understand that the number of iterations for subksp_1 is related to the call for the solution of the Schur complement ...
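A sketch of pulling those counts out programmatically, assuming a standard PCFIELDSPLIT setup (pc is the fieldsplit PC; error checking omitted):

  KSP      *subksp;
  PetscInt nsplits, its0, its1;
  PCFieldSplitGetSubKSP(pc, &nsplits, &subksp);
  KSPGetTotalIterations(subksp[0], &its0);   /* cumulative iterations of split 0 */
  KSPGetTotalIterations(subksp[1], &its1);   /* cumulative iterations of split 1 */
  PetscFree(subksp);                         /* the caller frees the returned array */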
Please email the code that fails to petsc-ma...@mcs.anl.gov
I guess that the partitioning is fixed, as two results can also differ when I call two successive solves where the matrix, the RHS vector, and everything else are identical. In that case the factorization/partitioning is reused by MUMPS and only the solve phase is executed twice, which alone leads to slightly different results.
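In PETSc terms, the reuse being described is the default behavior when the operator is unchanged between solves; a sketch (ksp, A, b, x1, x2 assumed already set up):

  KSPSetOperators(ksp, A, A);
  KSPSolve(ksp, b, x1);   /* symbolic + numeric factorization happen on the first solve */
  KSPSolve(ksp, b, x2);   /* factorization reused: only the MUMPS solve phase runs again */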
On Mar 14, 2018, at 3:56 AM, Natacha BEREUX wrote:
Thanks for your answer.
In between I tried to call directly MatLUFactorSymbolic then MatLUFactorNumeric to avoid MatGetOrdering, and the code fails later (in the call to the SuperLU routine pdgssvx). I would prefer to use PETSc for the computation of the nullbasis: the input matrix is a PETSc "Mat" (MP ...
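For context, the explicit factorization path mentioned above generically looks like the sketch below (SuperLU_DIST through PETSc's factor interface; variable names are placeholders, error checks are omitted, and the ordering call is exactly the step the poster was trying to skip):

  Mat           F;
  IS            rowperm, colperm;
  MatFactorInfo info;
  MatFactorInfoInitialize(&info);
  MatGetFactor(A, MATSOLVERSUPERLU_DIST, MAT_FACTOR_LU, &F);
  MatGetOrdering(A, MATORDERINGNATURAL, &rowperm, &colperm);
  MatLUFactorSymbolic(F, A, rowperm, colperm, &info);   /* symbolic analysis */
  MatLUFactorNumeric(F, A, &info);                      /* this is where pdgssvx runs */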