Re: [petsc-users] ParMetis error

2020-02-19 Thread Smith, Barry F. via petsc-users
Mark, It may be best to try jumping to the latest PETSc 3.12. ParMETIS had some difficult issues with matrices we started to provide to it in the last year, and the code to handle the problems may not be in 3.11. If the problem persists in 3.12 then I would start with checking with v

Re: [petsc-users] Matrix-free method in PETSc

2020-02-18 Thread Smith, Barry F. via petsc-users
il update function, > assuming the result will be passed into the matrix operation automatically? > > You update the information in the context associated with the shell matrix. > No need to destroy it. > > Thanks, > > Matt > > Thanks, > Yuyun > &
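
A minimal sketch of that update-in-place pattern; the context struct and its fields are hypothetical stand-ins for whatever was passed to MatCreateShell():

    #include <petscmat.h>

    typedef struct {      /* hypothetical user context stored inside the shell matrix */
      PetscReal dt;
      PetscReal shift;
    } AppCtx;

    PetscErrorCode UpdateShellContext(Mat Ashell, PetscReal new_dt)
    {
      AppCtx        *ctx;
      PetscErrorCode ierr;

      ierr = MatShellGetContext(Ashell, &ctx);CHKERRQ(ierr); /* same struct handed to MatCreateShell() */
      ctx->dt = new_dt;  /* the next MatMult() on Ashell sees the new value; nothing is destroyed or recreated */
      return 0;
    }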

Re: [petsc-users] Matrix-free method in PETSc

2020-02-17 Thread Smith, Barry F. via petsc-users
side global variables!) > > After I create such a shell matrix, can I use it like a regular matrix in KSP > and utilize preconditioners? > > Thanks! > Yuyun > From: petsc-users on behalf of Yuyun Yang > > Sent: Sunday, February 16, 2020 3:12 AM > To: Smith, Ba
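
A hedged sketch of handing such a shell matrix to KSP; Ashell is the matrix-free operator and Papprox is an assumed assembled approximation from which the preconditioner is built:

    #include <petscksp.h>

    PetscErrorCode SolveWithShell(MPI_Comm comm, Mat Ashell, Mat Papprox, Vec b, Vec x)
    {
      KSP            ksp;
      PetscErrorCode ierr;

      ierr = KSPCreate(comm, &ksp);CHKERRQ(ierr);
      ierr = KSPSetOperators(ksp, Ashell, Papprox);CHKERRQ(ierr); /* Krylov method applies Ashell; PC is built from Papprox */
      ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);                /* e.g. -pc_type ilu uses Papprox; -pc_type none is fully matrix-free */
      ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
      ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
      return 0;
    }

Most preconditioners need an assembled Pmat; with only the shell operator available, PCNONE or a PCSHELL are the usual choices.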

Re: [petsc-users] Matrix-free method in PETSc

2020-02-15 Thread Smith, Barry F. via petsc-users
Yuyun, If you are speaking about using a finite difference stencil on a structured grid, where you provide the Jacobian-vector products yourself by looping over the grid and applying the stencil, we unfortunately do not have exactly that kind of example. But it is actually not diff
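
A minimal sketch of the kind of shell-matrix setup being described, for a sequential 1-D three-point stencil; the grid size, scaling, and names below are made up for illustration:

    #include <petscksp.h>

    typedef struct { PetscInt n; PetscReal h; } StencilCtx;  /* hypothetical grid data */

    static PetscErrorCode StencilMult(Mat A, Vec x, Vec y)   /* y = A x applied stencil-wise; no matrix is stored */
    {
      StencilCtx        *ctx;
      const PetscScalar *xa;
      PetscScalar       *ya;
      PetscInt           i;
      PetscErrorCode     ierr;

      ierr = MatShellGetContext(A, &ctx);CHKERRQ(ierr);
      ierr = VecGetArrayRead(x, &xa);CHKERRQ(ierr);
      ierr = VecGetArray(y, &ya);CHKERRQ(ierr);
      for (i = 0; i < ctx->n; i++) {
        PetscScalar left  = (i > 0)          ? xa[i-1] : 0.0;
        PetscScalar right = (i < ctx->n - 1) ? xa[i+1] : 0.0;
        ya[i] = (2.0*xa[i] - left - right)/(ctx->h*ctx->h);  /* classic 3-point Laplacian stencil */
      }
      ierr = VecRestoreArray(y, &ya);CHKERRQ(ierr);
      ierr = VecRestoreArrayRead(x, &xa);CHKERRQ(ierr);
      return 0;
    }

    int main(int argc, char **argv)
    {
      Mat            A;
      StencilCtx     ctx;
      PetscErrorCode ierr;

      ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
      ctx.n = 100; ctx.h = 1.0/(ctx.n + 1);
      ierr = MatCreateShell(PETSC_COMM_SELF, ctx.n, ctx.n, ctx.n, ctx.n, &ctx, &A);CHKERRQ(ierr);
      ierr = MatShellSetOperation(A, MATOP_MULT, (void (*)(void))StencilMult);CHKERRQ(ierr);
      /* A can now be handed to KSPSetOperators(); only its action is ever needed */
      ierr = MatDestroy(&A);CHKERRQ(ierr);
      ierr = PetscFinalize();
      return 0;
    }

In parallel the structure is the same, with local sizes taken from a DMDA and the stencil applied to a ghosted local vector.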

Re: [petsc-users] Crash caused by strange error in KSPSetUp

2020-02-13 Thread Smith, Barry F. via petsc-users
Richard, It is likely that for these problems some of the integers become too large for the int variable to hold them, thus they overflow and become negative. You should make a new PETSC_ARCH configuration of PETSc that uses the configure option --with-64-bit-indices, this will c

Re: [petsc-users] DMUMPS_LOAD_RECV_MSGS

2020-02-13 Thread Smith, Barry F. via petsc-users
Given the 2040, either you or MUMPS is running out of communicators. Do you use your own communicators in your code, and are you freeing them when you don't need them? If it is not your code then it is MUMPS that is running out and you should contact them directly. RECURSIVE SUBROU
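
For reference, a minimal sketch of the duplicate/free discipline in question (purely illustrative):

    #include <petscsys.h>

    int main(int argc, char **argv)
    {
      MPI_Comm       mycomm;
      PetscErrorCode ierr;

      ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
      MPI_Comm_dup(PETSC_COMM_WORLD, &mycomm);  /* each dup consumes one of a finite pool of MPI contexts */
      /* ... hand mycomm to a library or use it for your own messages ... */
      MPI_Comm_free(&mycomm);                   /* forgetting this eventually exhausts the pool */
      ierr = PetscFinalize();
      return ierr;
    }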

Re: [petsc-users] Implementing the Sherman Morrison formula (low rank update) in petsc4py and FEniCS?

2020-02-10 Thread Smith, Barry F. via petsc-users
Note that you can add -snes_fd_operator and get Newton's method with a preconditioner built from the Picard matrix. Barry > On Feb 10, 2020, at 11:16 AM, Jed Brown wrote: > > Olek Niewiarowski writes: > >> Barry, >> Thank you for your help and detailed suggestions. I will try to impl

Re: [petsc-users] What is the right way to implement a (block) Diagonal ILU as PC?

2020-02-10 Thread Smith, Barry F. via petsc-users
ion methods. But I will probably need to look through a number of > literatures before laying my hands on those (or bother you with more > questions!). Anyway, thanks again for your kind help. > > > All the best, > Hao > >> On Feb 8, 2020, at 8:02 AM, Smith, Barr

Re: [petsc-users] What is the right way to implement a (block) Diagonal ILU as PC?

2020-02-07 Thread Smith, Barry F. via petsc-users
.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecScatterBegin 84 1.0 5.2800e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > KSPSetUp 4 1.0 1.4765e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 > KSPSolve 1 1.0 1.8514e+00 1.0 4.31e+09 1.0 0.0e+00 0.0e+00 > 0.0e+00 85

Re: [petsc-users] What is the right way to implement a (block) Diagonal ILU as PC?

2020-02-06 Thread Smith, Barry F. via petsc-users
4 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecNormalize 75 1.0 1.8462e-01 1.0 2.51e+08 1.0 0.0e+00 0.0e+00 > 0.0e+00 3 3 0 0 0 3 3 0 0 0 1360 > KSPSetUp 4 1.0 1.1341e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.

Re: [petsc-users] Implementing the Sherman Morrison formula (low rank update) in petsc4py and FEniCS?

2020-02-06 Thread Smith, Barry F. via petsc-users
anks, > > Alexander (Olek) Niewiarowski > PhD Candidate, Civil & Environmental Engineering > Princeton University, 2020 > Cell: +1 (610) 393-2978 > From: Matthew Knepley > Sent: Thursday, February 6, 2020 5:33 > To: Olek Niewiarowski > Cc: Smith, Barry F. ; pets

Re: [petsc-users] What is the right way to implement a (block) Diagonal ILU as PC?

2020-02-05 Thread Smith, Barry F. via petsc-users
pport unpreconditioned in > LEFT/RIGHT (either way). Is it possible to do that (output unpreconditioned > residual) in PETSc at all? -ksp_monitor_true_residual. You can also run GMRES (and some other methods) with right preconditioning, -ksp_pc_side right; then the residual computed is by the a
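
A small sketch of selecting right preconditioning in code rather than on the command line; the KSP object is assumed to already exist:

    #include <petscksp.h>

    PetscErrorCode UseRightPreconditioning(KSP ksp)
    {
      PetscErrorCode ierr;

      ierr = KSPSetType(ksp, KSPGMRES);CHKERRQ(ierr);   /* GMRES supports right preconditioning */
      ierr = KSPSetPCSide(ksp, PC_RIGHT);CHKERRQ(ierr); /* the residual the method works with is then the true residual */
      return 0;
    }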

Re: [petsc-users] Implementing the Sherman Morrison formula (low rank update) in petsc4py and FEniCS?

2020-02-05 Thread Smith, Barry F. via petsc-users
lp you do this. > > Thanks, > > Matt > > On Wed, Feb 5, 2020 at 1:36 AM Smith, Barry F. via petsc-users > wrote: > >I am not sure of everything in your email but it sounds like you want to > use a "Picard" iteration to solve [K(u) − k a a^T] Δu = −F(u).

Re: [petsc-users] Triple increasing of allocated memory during KSPSolve calling(GMRES preconditioned by ASM)

2020-02-05 Thread Smith, Barry F. via petsc-users
o additional memory? -ksp_type gmres or bcgs -pc_type jacobi (the sor won't work because of the zero diagonals) It will not be a good preconditioner. Are you sure you don't have additional memory for the preconditioner? A good preconditioner might require up to 5 to 6 the

Re: [petsc-users] Implementing the Sherman Morrison formula (low rank update) in petsc4py and FEniCS?

2020-02-04 Thread Smith, Barry F. via petsc-users
I am not sure of everything in your email but it sounds like you want to use a "Picard" iteration to solve [K(u) − k a a^T] Δu = −F(u). That is, solve A(u^{n}) (u^{n+1} - u^{n}) = F(u^{n}) - A(u^{n}) u^{n}, where A(u) = K(u) − k a a^T. PETSc provides code for this with SNESSetPicard() (see the manual p
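
A hedged sketch of the corresponding call; FormPicardFunction and FormPicardMatrix are hypothetical user callbacks, and the SNESSetPicard() man page defines exactly what each must compute:

    #include <petscsnes.h>

    static PetscErrorCode FormPicardFunction(SNES snes, Vec u, Vec f, void *ctx)
    {
      /* fill f with the Picard function evaluated at u */
      return 0;
    }

    static PetscErrorCode FormPicardMatrix(SNES snes, Vec u, Mat A, Mat P, void *ctx)
    {
      /* assemble A(u), e.g. K(u) - k a a^T, into A (and P if different) */
      return 0;
    }

    PetscErrorCode SetupPicard(SNES snes, Vec r, Mat A, void *user)
    {
      PetscErrorCode ierr;

      ierr = SNESSetPicard(snes, r, FormPicardFunction, A, A, FormPicardMatrix, user);CHKERRQ(ierr);
      ierr = SNESSetFromOptions(snes);CHKERRQ(ierr);  /* solver and monitoring choices can then be made at run time */
      return 0;
    }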

Re: [petsc-users] Required structure and attrs for MatLoad from hdf5

2020-02-04 Thread Smith, Barry F. via petsc-users
I think this is a Python-Matlab question, not specifically related to PETSc in any way. Googling python matrix hdf5 matlab there are mentions of h5py library that can be used to write out sparse matrices in Matlab HDF5 format. Which could presumably be read by PETSc. PETSc can also read in th

Re: [petsc-users] What is the right way to implement a (block) Diagonal ILU as PC?

2020-02-04 Thread Smith, Barry F. via petsc-users
> On Feb 4, 2020, at 12:41 PM, Hao DONG wrote: > > Dear all, > > > I have a few questions about the implementation of diagonal ILU PC in PETSc. > I want to solve a very simple system with KSP (in parallel), the nature of > the system (finite difference time-harmonic Maxwell) is probably no

Re: [petsc-users] Triple increasing of allocated memory during KSPSolve calling(GMRES preconditioned by ASM)

2020-02-04 Thread Smith, Barry F. via petsc-users
and it form logs, it is required 459 MB and 52 MB for matrix and > vector storage respectively. After summing of all objects for which memory is > allocated we get only 517 MB. > > Thank you again for your time! Have a nice day. > > Kind regards, > Dmitry > >

Re: [petsc-users] Triple increasing of allocated memory during KSPSolve calling(GMRES preconditioned by ASM)

2020-02-03 Thread Smith, Barry F. via petsc-users
GMRES also can by default require about 35 work vectors if it reaches the full restart. You can set a smaller restart with -ksp_gmres_restart 15 for example but this can also hurt the convergence of GMRES dramatically. People sometimes use the KSPBCGS algorithm since it does not require all
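
In code, those two choices could look like the sketch below (the restart value 15 is just the example from the message):

    #include <petscksp.h>

    PetscErrorCode LimitKrylovMemory(KSP ksp, PetscBool use_bcgs)
    {
      PetscErrorCode ierr;

      if (use_bcgs) {
        ierr = KSPSetType(ksp, KSPBCGS);CHKERRQ(ierr);    /* small, fixed number of work vectors */
      } else {
        ierr = KSPSetType(ksp, KSPGMRES);CHKERRQ(ierr);
        ierr = KSPGMRESSetRestart(ksp, 15);CHKERRQ(ierr); /* smaller restart, fewer work vectors, possibly slower convergence */
      }
      return 0;
    }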

Re: [petsc-users] Running moose/scripts/update_and_rebuild_petsc.sh on HPC

2020-01-31 Thread Smith, Barry F. via petsc-users
You might find this option useful. --with-packages-download-dir= Skip network download of package tarballs and locate them in specified dir. If not found in dir, print package URL - so it can be obtained manually. This generates a list of URLs to download so you don't need to

Re: [petsc-users] Running moose/scripts/update_and_rebuild_petsc.sh on HPC

2020-01-31 Thread Smith, Barry F. via petsc-users
https://gitlab.com/petsc/petsc/-/merge_requests/2494 Will only turn off the hypre batch build if it is a KNL system. Will be added to the maint branch. Barry > On Jan 31, 2020, at 11:58 AM, Tomas Mondragon > wrote: > > Hypre problem resolved. PETSc commit 05f86fb made in August 05, 201

Re: [petsc-users] Product of matrix row times a vector

2020-01-30 Thread Smith, Barry F. via petsc-users
MatGetSubMatrix() and then do the product on the sub matrix then VecSum Barry > On Jan 30, 2020, at 3:02 PM, Jeremy Theler wrote: > > Sorry if this is basic, but I cannot figure out how to do it in > parallel and I'd rather not say how I do it in single-processor mode > because I would

Re: [petsc-users] Fwd: Running moose/scripts/update_and_rebuild_petsc.sh on HPC

2020-01-30 Thread Smith, Barry F. via petsc-users
As Jed would say --with-lgrind=0 > On Jan 30, 2020, at 2:49 PM, Fande Kong wrote: > > > Hi All, > > It looks like a bug for me. > > PETSc was still trying to detect lgrind even we set "--with-lgrind=0". The > configuration log is attached. Any way to disable lgrind detection. > > Thanks

Re: [petsc-users] Solver compilation with 64-bit version of PETSc under Windows 10 using Cygwin

2020-01-22 Thread Smith, Barry F. via petsc-users
igure solved > my problem. > So I attached the associated log files named as > configure_openblas_64-bit-indices.log and test_openblas_64-bit-indices.log > > > All operations were performed with barry/2020-01-15/support-default-integer-8 > version of PETSc. > > > Kin

Re: [petsc-users] error handling

2020-01-21 Thread Smith, Barry F. via petsc-users
ory"? > > Barry > > > > > > Thanks, > > Sam > > > > On Mon, Jan 20, 2020 at 4:06 PM Smith, Barry F. wrote: > > > > Sam, > > > > I am not sure what your goal is but PETSc error return codes are error > > r

Re: [petsc-users] error handling

2020-01-21 Thread Smith, Barry F. via petsc-users
because of the unknown error it could be that the releasing of the memory causes a real crash. Is your main concern when you use PETSc for a large problem and it errors because it is "out of memory"? Barry > > Thanks, > Sam > > On Mon, Jan 20, 2020 at 4:

Re: [petsc-users] Solver compilation with 64-bit version of PETSc under Windows 10 using Cygwin

2020-01-21 Thread Smith, Barry F. via petsc-users
nload package OPENBLAS from: >>> git://https://github.com/xianyi/OpenBLAS.git >>> * If URL specified manually - perhaps there is a typo? >>> * If your network is disconnected - please reconnect and rerun ./configure >>> * Or perhaps you have a firewall blocking the download >&

Re: [petsc-users] Solver compilation with 64-bit version of PETSc under Windows 10 using Cygwin

2020-01-21 Thread Smith, Barry F. via petsc-users
and use the configure option: > --download-openblas=/yourselectedlocation > Could not locate downloaded package OPENBLAS in > /cygdrive/d/Computational_geomechanics/installation/petsc-barry/arch-mswin-c-debug/externalpackages > > But I checked the last location (.../externalpac

Re: [petsc-users] error handling

2020-01-20 Thread Smith, Barry F. via petsc-users
Sam, I am not sure what your goal is but PETSc error return codes are error return codes not exceptions. They mean that something catastrophic happened and there is no recovery. Note that PETSc solvers do not return nonzero error codes on failure to converge etc. You call, for exam
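
A minimal sketch of that pattern, assuming an already configured KSP: a nonzero return code means something catastrophic, while failure to converge is queried separately after the solve:

    #include <petscksp.h>

    PetscErrorCode SolveAndCheck(KSP ksp, Vec b, Vec x)
    {
      KSPConvergedReason reason;
      PetscErrorCode     ierr;

      ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);                 /* nonzero ierr = catastrophic error, not "did not converge" */
      ierr = KSPGetConvergedReason(ksp, &reason);CHKERRQ(ierr);
      if (reason < 0) {                                         /* negative reasons are divergence codes */
        ierr = PetscPrintf(PETSC_COMM_WORLD, "KSP diverged, reason %d\n", (int)reason);CHKERRQ(ierr);
      }
      return 0;
    }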

Re: [petsc-users] Solver compilation with 64-bit version of PETSc under Windows 10 using Cygwin

2020-01-20 Thread Smith, Barry F. via petsc-users
aborting MPI_COMM_WORLD (comm=0x4400), error 50152059, comm rank 0 > > error analysis - > > [0] on DESKTOP-R88IMOB > ./ex5f aborted the job. abort code 50152059 > > error analysis - > Completed test examples > > Kind regards, > Dmitry Melnichuk &

Re: [petsc-users] Solver compilation with 64-bit version of PETSc under Windows 10 using Cygwin

2020-01-18 Thread Smith, Barry F. via petsc-users
Dmitry, I have completed and tested the branch barry/2020-01-15/support-default-integer-8; it is undergoing testing now: https://gitlab.com/petsc/petsc/merge_requests/2456 Please give it a try. Note that MPI has no support for integer promotion, so YOU must ensure that any MPI calls

Re: [petsc-users] Solver compilation with 64-bit version of PETSc under Windows 10 using Cygwin

2020-01-17 Thread Smith, Barry F. via petsc-users
) for functions */ > if (useFerr) { > - OutputFortranToken( fout, 7, "integer" ); > + OutputFortranToken( fout, 7, "PetscErrorCode" ); > OutputFortranToken( fout, 1, errArgNameParm); > } else if (is_function) { > OutputFortranToken( fout, 7, ArgTo

Re: [petsc-users] use superlu and hypre's gpu features through PETSc

2020-01-16 Thread Smith, Barry F. via petsc-users
That is superlu_dist and hypre. Yes, but both backends are rather primitive and will be a bit of a struggle to use. For superlu_dist you need to get the branch barry/fix-superlu_dist-py-for-gpus and rebase it against master. I only recommend trying them if you are adventuresome. Not

Re: [petsc-users] DMDA Error

2020-01-16 Thread Smith, Barry F. via petsc-users
Are you increasing your problem size with the number of ranks, or is it the same size problem? It could also be an out-of-memory issue. No error message is printed, which is not standard. It should first print a message saying why it failed. Are you sure all the libraries were rebuilt? R

Re: [petsc-users] SNESSetOptionsPrefix usage

2020-01-16 Thread Smith, Barry F. via petsc-users
; > Le mer. 15 janv. 2020 à 18:56, Matthew Knepley a écrit : > I think that Mark is suggesting that no command line arguments are getting in. > > Timothee, > > Can you use any command line arguments? > > Thanks, > > Matt > > On Wed, Jan 15

Re: [petsc-users] Solver compilation with 64-bit version of PETSc under Windows 10 using Cygwin

2020-01-15 Thread Smith, Barry F. via petsc-users
Working on it now; may be doable > On Jan 15, 2020, at 11:55 AM, Matthew Knepley wrote: > > On Wed, Jan 15, 2020 at 10:26 AM Дмитрий Мельничук > wrote: > > And I'm not sure why you are having to use PetscInt for ierr. All PETSc > > routines should be using 'PetscErrorCode for ierr' > >

Re: [petsc-users] SNESSetOptionsPrefix usage

2020-01-15 Thread Smith, Barry F. via petsc-users
Should still work. Run in the debugger and put a break point in snessetoptionsprefix_ and see what it is trying to do Barry > On Jan 15, 2020, at 8:58 AM, Timothée Nicolas > wrote: > > Hi, thanks for your answer, > > I'm using Petsc version 3.10.4 > > Timothée > > Le mer. 15 janv. 20

Re: [petsc-users] SNESSetOptionsPrefix usage

2020-01-15 Thread Smith, Barry F. via petsc-users
Works for me with PETSc 3.12; what version of PETSc are you using? program main #include use petsc implicit none ! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - PetscErrorCode ierr SNES snes1 call PetscInitialize(PETSC_NULL_C

Re: [petsc-users] killed 9 signal after upgrade from petsc 3.9.4 to 3.12.2

2020-01-10 Thread Smith, Barry F. via petsc-users
ges in parmetis between the two PETSc releases are these below, > but I don’t see how they could cause issues > > kl-18448:pkg-parmetis szampini$ git log -2 > commit ab4fedc6db1f2e3b506be136e3710fcf89ce16ea (HEAD -> master, tag: > v4.0.3-p5, origin/master, origin/dalcinl/rando

Re: [petsc-users] petsc4py mpi matrix size

2020-01-10 Thread Smith, Barry F. via petsc-users
Yes, with, for example, MATMPIAIJ, the matrix entries are distributed among the processes; first verify that you are using an MPI matrix, not Seq, since Seq will keep an entire copy on each process. But the parallel matrices do come with some overhead for metadata. So for small matrices li

Re: [petsc-users] killed 9 signal after upgrade from petsc 3.9.4 to 3.12.2

2020-01-09 Thread Smith, Barry F. via petsc-users
9 2019 +0300 GKlib: Use gk_randint32() to define the RandomInRange() macro On Jan 9, 2020, at 4:31 AM, Smith, Barry F. via petsc-users wrote: This is extremely worrisome: ==23361== Use of uninitialised value of size 8 ==23361==at 0x847E939: gk_randint64 (random.c:99) =

Re: [petsc-users] set petsc matrix using input array

2020-01-09 Thread Smith, Barry F. via petsc-users
Since PETSc does not use that format there, of course, has to be a time when you have duplicate memory. Barry > On Jan 9, 2020, at 12:47 PM, Sam Guo wrote: > > Dear PETSc dev team, >Suppose I have the matrix already in triplet format int int[] I, int[] J, > double[] A, Is possi
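
A hedged sketch of copying triplet (COO) data into a created and preallocated PETSc matrix; rows, cols, vals, and nnz are the hypothetical input arrays, which necessarily coexist in memory with the Mat until they are freed:

    #include <petscmat.h>

    PetscErrorCode FillFromTriplets(Mat A, PetscInt nnz, const PetscInt rows[], const PetscInt cols[], const PetscScalar vals[])
    {
      PetscInt       k;
      PetscErrorCode ierr;

      for (k = 0; k < nnz; k++) {
        /* ADD_VALUES sums duplicate (i,j) pairs; use INSERT_VALUES if the triplets are unique */
        ierr = MatSetValues(A, 1, &rows[k], 1, &cols[k], &vals[k], ADD_VALUES);CHKERRQ(ierr);
      }
      ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      return 0;
    }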

Re: [petsc-users] Problems with PCMGSetLevels and MatNullSpaceCreate in Fortran

2020-01-08 Thread Smith, Barry F. via petsc-users
https://www.mcs.anl.gov/petsc/documentation/changes/38.html > On Jan 8, 2020, at 9:22 PM, TAY wee-beng wrote: > > Hi, > > After upgrading to the newer ver of PETSc 3.8.3, I got these error during > compile in VS2008 with Intel Fortran: > > call PCMGSetLevels(pc,mg_lvl,PETSC_NULL_OBJECT,ierr)

Re: [petsc-users] killed 9 signal after upgrade from petsc 3.9.4 to 3.12.2

2020-01-08 Thread Smith, Barry F. via petsc-users
This is extremely worrisome: ==23361== Use of uninitialised value of size 8 ==23361==at 0x847E939: gk_randint64 (random.c:99) ==23361==by 0x847EF88: gk_randint32 (random.c:128) ==23361==by 0x81EBF0B: libparmetis__Match_Global (in /space/hpc-home/trianas/petsc-3.12.3/arch-linux2-c-

Re: [petsc-users] PetscOptionsGetBool error

2020-01-08 Thread Smith, Barry F. via petsc-users
Try the debugger. > On Jan 8, 2020, at 4:01 PM, Anthony Paul Haas wrote: > > Hello, > > I am using Petsc 3.7.6.0. with Fortran code and I am getting a segmentation > violation for the following line: > > call > PetscOptionsGetBool(PETSC_NULL_CHARACTER,"-use_mumps_lu",flg_mumps_lu,flg,se

Re: [petsc-users] Problems applying multigrid

2020-01-08 Thread Smith, Barry F.
Yeah, this is an annoying feature of DMDA and PCMG in PETSc. Some coarse grid ranges and particular parallel layouts won't work with geometric multigrid. You are using 314 on the coarse and 628 on the fine grid. Try changing them by 1 and start with one process. Barry > On Jan 8

Re: [petsc-users] [petsc-maint] (no subject)

2020-01-07 Thread Smith, Barry F.
> On Jan 7, 2020, at 8:59 AM, Mark Adams wrote: > > I’m not sure what the compilers, and C++ are doing here > > On Tue, Jan 7, 2020 at 9:17 AM Кудров Илья wrote: > However, after configuring > > cout<<1. + 1.*PETSC_i< > outputs (1, 0) instead of (1, 1). Where after configure? PETSC_

Re: [petsc-users] TS shallow reset

2020-01-07 Thread Smith, Barry F.
Do you reset the initial timestep? Otherwise the second solve thinks it is at the end. Also you may need to reset the iteration number. Something like ierr = TSSetTime(appctx->ts, 0);CHKERRQ(ierr); ierr = TSSetStepNumber(appctx->ts, 0);CHKERRQ(ierr); ierr = TSSetTimeStep(appctx->ts,

Re: [petsc-users] possible memory leak

2019-12-24 Thread Smith, Barry F.
h one (or which revision), I will check it. > > > Sent: Wednesday, 25 December 2019 at 00:53 > From: "Smith, Barry F." > To: "Marius Buerkle" > Cc: "Mark Adams" , "petsc-users@mcs.anl.gov" > > Subject: Re: [petsc-users]

Re: [petsc-users] possible memory leak

2019-12-24 Thread Smith, Barry F.
There are no leaks, but it appears that rather than recycling the memory PETSc returns to it, the system is allocating new space as needed. Since the old PETSc pages are never used again this should be harmless. Barry > On Dec 24, 2019, at 9:47 AM, Marius

Re: [petsc-users] Strange Memory Increase in PCHYPRE BOOMERAMG

2019-12-18 Thread Smith, Barry F.
Thank you for the full and detailed report. The memory leak could be anywhere but my guess is it is in the interface between PETSc and hypre. The first thing to check is whether PETSc memory keeps increasing. The simplest way to do this is to run your code 3 independent times with -malloc_debug
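
One simple way to watch whether PETSc-allocated memory keeps growing between solves is sketched below (illustrative only; call it once per solve and compare the numbers):

    #include <petscsys.h>

    PetscErrorCode ReportMemory(const char *label)
    {
      PetscLogDouble petsc_malloced, resident;
      PetscErrorCode ierr;

      ierr = PetscMallocGetCurrentUsage(&petsc_malloced);CHKERRQ(ierr); /* memory obtained through PetscMalloc() */
      ierr = PetscMemoryGetCurrentUsage(&resident);CHKERRQ(ierr);       /* total resident memory of the process */
      ierr = PetscPrintf(PETSC_COMM_WORLD, "%s: PetscMalloc %g bytes, resident %g bytes\n",
                         label, petsc_malloced, resident);CHKERRQ(ierr);
      return 0;
    }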

Re: [petsc-users] Bad Termination error in MatPartitioningApply with ParMETIS

2019-12-18 Thread Smith, Barry F.
Can you please send us the exact code and data file that causes the crash? And any options. There are bugs in Metis/Parmetis that we need to track down and eliminate since it is so central to PETSc's workflow. Barry > On Dec 18, 2019, at 4:21 AM, Eda Oktay wrote: > Hi all, >

Re: [petsc-users] MatView to disk for Elemental

2019-12-10 Thread Smith, Barry F.
uerkle wrote: > > Hi, > > Is it actually possible to submit a pull (merge) request? I followed the > petsc wiki but this didn't work. > > Best > Marius > > > Sent: Thursday, 05 December 2019 at 07:45 > From: "Marius Buerkle"

Re: [petsc-users] CMake error in PETSc

2019-12-09 Thread Smith, Barry F.
PETSC_ARCH to find the configuration file AND use the compilers from the configuration file. It works for me, please let me know if you have trouble. Barry > On Dec 8, 2019, at 10:38 PM, Yingjie Wu wrote: > > Thank you very much for your help. > My programs are as follow >

Re: [petsc-users] CMake error in PETSc

2019-12-08 Thread Smith, Barry F.
There is something missing in the cmake process that is causing needed libraries not to be linked. Please email your program and your CMake stuff (files you use) so we can reproduce the problem and find a fix. Barry > On Dec 8, 2019, at 10:06 PM, Yingjie Wu wrote: > > Hi, > Thank

Re: [petsc-users] petsc under cygwin

2019-12-05 Thread Smith, Barry F.
an build and run C programs > with MPI. > > I have -with-log=0 because there is an overhead if many small objects > are created, which is my case. > > As for the fortran, I will remove --with-fortran=0 > > On 12/5/2019 11:53 PM, Smith, Barry F. wrote: >> Can you

Re: [petsc-users] petsc under cygwin

2019-12-05 Thread Smith, Barry F.
Can you actually build and run C++ programs with the MPI? Executing: mpicxx -o /tmp/petsc-Llvze6/config.setCompilers/conftest.exe -fopenmp -fPIC /tmp/petsc-Llvze6/config.setCompilers/conftest.o Possible ERROR while running linker: exit code 1 stderr: /usr/lib/gcc/x86_64-pc-cygwin/7.4.0/

Re: [petsc-users] Updating TS solution outside PETSc

2019-12-05 Thread Smith, Barry F.
set it to the value it would get if it wasn't an edge, then the > derivative isn't preserved anymore. > > This is where I get stuck. > > Ellen > > > On 12/5/19 10:16 AM, Smith, Barry F. wrote: >> >> Are you using cell-centered or vertex

Re: [petsc-users] dof of DMDA & DMPlex

2019-12-05 Thread Smith, Barry F.
Hmm, for DMDA and DMStag it should not have a limit (certain ranges of values are better optimized than others, but more optimizations may be done). For DMPLEX in theory again it should be whatever you like (again larger values may require more optimization in our code to get really grea

Re: [petsc-users] Updating TS solution outside PETSc

2019-12-05 Thread Smith, Barry F.
Are you using cell-centered or vertex centered discretization ( makes a slight difference)? Our model is to use DM_BOUNDARY_MIRROR DMBoundaryType. This means that u_first_real_grid_point - u_its_ghost_point = 0 (since DMGlobalToLocal will automatically put into the physical ghost locat
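
A minimal sketch of a 1-D DMDA with mirror boundaries, where DMGlobalToLocal fills the physical ghost points as described (the grid size is illustrative):

    #include <petscdmda.h>

    int main(int argc, char **argv)
    {
      DM             da;
      Vec            g, l;
      PetscErrorCode ierr;

      ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
      /* 64 grid points, 1 dof, stencil width 1, mirror boundary at both ends */
      ierr = DMDACreate1d(PETSC_COMM_WORLD, DM_BOUNDARY_MIRROR, 64, 1, 1, NULL, &da);CHKERRQ(ierr);
      ierr = DMSetFromOptions(da);CHKERRQ(ierr);
      ierr = DMSetUp(da);CHKERRQ(ierr);
      ierr = DMCreateGlobalVector(da, &g);CHKERRQ(ierr);
      ierr = DMCreateLocalVector(da, &l);CHKERRQ(ierr);
      ierr = DMGlobalToLocalBegin(da, g, INSERT_VALUES, l);CHKERRQ(ierr); /* mirrored values land in the ghost slots */
      ierr = DMGlobalToLocalEnd(da, g, INSERT_VALUES, l);CHKERRQ(ierr);
      ierr = VecDestroy(&l);CHKERRQ(ierr);
      ierr = VecDestroy(&g);CHKERRQ(ierr);
      ierr = DMDestroy(&da);CHKERRQ(ierr);
      ierr = PetscFinalize();
      return 0;
    }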

Re: [petsc-users] Could not find a CXX preprocessor on Catalina

2019-12-05 Thread Smith, Barry F.
Hmm, MPICH and OpenMPI have also passed this info in their compilers; perhaps this is a newer version of clang that no longer tolerates these options. I think we need to strip out those options, as a guess > On Dec 4, 2019, at 2:18 PM, Balay, Satish wrote: > Yes - this is a link time opti

Re: [petsc-users] MatView to disk for Elemental

2019-12-04 Thread Smith, Barry F.
It still > outputs "Elemental matrix (explicit ordering)" to stdout, which is kinda > annoying; is there any way to turn this off? > > > From: "Smith, Barry F." > To: "Marius Buerkle" > Cc: "petsc-users@mcs.anl.gov" > Subject: Re:

Re: [petsc-users] (no subject)

2019-12-03 Thread Smith, Barry F.
will the following time steps reuse the Jacobian built > at the first time step? > > Best, > Li > > > > On Tue, Dec 3, 2019 at 12:10 AM Smith, Barry F. wrote: > > > > On Dec 2, 2019, at 2:30 PM, Li Luo wrote: > > > > -snes_mf fails to converge

Re: [petsc-users] MatView to disk for Elemental

2019-12-03 Thread Smith, Barry F.
sorry about this. The numerical values between C and Fortran got out of sync. I've attached a patch file you can apply with patch -p1 < format.patch or you can use the branch https://gitlab.com/petsc/petsc/merge_requests/2346 Barry > On Dec 3, 2019, at 1:10 AM, Marius Buerkle wrot

Re: [petsc-users] (no subject)

2019-12-02 Thread Smith, Barry F.
euse it forever. You can also try -snes_mf -snes_lag_jacobian -2 which should compute the Jacobian once, use that original one to build the preconditioner once and reuse the same preconditioner but use the matrix free to define the operator. Barry > > Regards, > Li >
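
In code, the lagging part of that suggestion corresponds roughly to the sketch below; see the SNESSetLagJacobian() man page for the exact semantics of the negative lag values:

    #include <petscsnes.h>

    PetscErrorCode LagJacobian(SNES snes)
    {
      PetscErrorCode ierr;

      ierr = SNESSetLagJacobian(snes, -2);CHKERRQ(ierr); /* programmatic counterpart of -snes_lag_jacobian -2 */
      return 0;
    }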

Re: [petsc-users] (no subject)

2019-12-02 Thread Smith, Barry F.
378 > Number of rows 756 > 0 1188 > 1 1188 > 2 1188 > 3 1188 > 4 1188 > 5 1188 > ... > > Is this normal? > When using MCFD, is there any difference using mpiaij and mpibaij? > > Best, > Li > >

Re: [petsc-users] (no subject)

2019-12-01 Thread Smith, Barry F.
How many colors is it requiring? And how long is the MatGetColoring() taking? Are you running in parallel? The MatGetColoring() MATCOLORINGSL uses a sequential coloring algorithm so if your matrix is large and parallel the coloring will take a long time. The parallel colorings are MATCOLO

Re: [petsc-users] Weird behaviour of PCGAMG in coupled poroelasticity

2019-11-29 Thread Smith, Barry F.
I would first run with -ksp_monitor_true_residual -ksp_converged_reason to make sure that those "very fast" cases are actually converging; in those runs also use -ksp_view to see what the GAMG parameters are. Also use the -info option to have it print details on the solution process. Ba

Re: [petsc-users] Outputting matrix for viewing in matlab

2019-11-29 Thread Smith, Barry F.
> On Nov 28, 2019, at 7:07 PM, baikadi pranay wrote: > > Hello PETSc users, > > I have a sparse matrix built and I want to output the matrix for viewing in > matlab. However i'm having difficulty outputting the matrix. I am writing my > program in Fortran90 and I've included the following

Re: [petsc-users] Memory optimization

2019-11-26 Thread Smith, Barry F.
> I am basically trying to solve a finite element problem, which is why in 3D I > have 7 non-zero diagonals that are quite far apart from one another. In 2D I > only have 5 non-zero diagonals that are less far apart. So is it normal that > the set up time is around 400 times greater in the 3D

Re: [petsc-users] petsc without MPI

2019-11-25 Thread Smith, Barry F.
I agree this is confusing. https://gitlab.com/petsc/petsc/merge_requests/2331 the flag PETSC_HAVE_MPI will no longer be set when MPI is not used (only MPIUNI is used). Barry The code API still has MPI* in it with MPI but they are stubs that just handle the sequential code and do not r

Re: [petsc-users] Domain decomposition using DMPLEX

2019-11-25 Thread Smith, Barry F.
"No, I have an unstructured mesh that increases in resolution away from the center of the cuboid. See Figure: 5 in the ArXiv paper https://arxiv.org/pdf/1907.02604.pdf for a slice through the midplane of the cuboid. Given this type of mesh, will dmplex do a cuboidal domain decomposition?"

Re: [petsc-users] how to set the matrix with the new cell ordering with metis

2019-11-24 Thread Smith, Barry F.
You can possibly use the PETSc object AO (see AOCreate()) to manage the reordering. The non-contiguous order you start with is the application ordering and the new contiguous ordering is the petsc ordering. Note you will likely need to reorder the cell vertex or edge numbers as well. Bar
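
A hedged sketch of that AO usage; app_indices and cell_list are hypothetical arrays holding the non-contiguous (application) numbering:

    #include <petscao.h>

    PetscErrorCode RenumberCells(MPI_Comm comm, PetscInt napp, const PetscInt app_indices[],
                                 PetscInt ncells, PetscInt cell_list[])
    {
      AO             ao;
      PetscErrorCode ierr;

      /* NULL for the PETSc indices means 0..napp-1 in process order, i.e. the new contiguous numbering */
      ierr = AOCreateBasic(comm, napp, app_indices, NULL, &ao);CHKERRQ(ierr);
      ierr = AOApplicationToPetsc(ao, ncells, cell_list);CHKERRQ(ierr); /* converts cell_list in place */
      ierr = AODestroy(&ao);CHKERRQ(ierr);
      return 0;
    }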

Re: [petsc-users] solve problem with pastix

2019-11-19 Thread Smith, Barry F. via petsc-users
rch-linux2-cxx-opt/externalpackages/pastix_5.2.3/src/sopalin/src/sopalin_thread.c:548: > undefined reference to `hwloc_bitmap_asprintf' > > Any idea is appreciated. I can attach configure.log as needed. > > Giang > > > On Thu, Nov 7, 2019 at 12:18 AM hg wrote: > Hi Ba

Re: [petsc-users] problem downloading "fix-syntax-for-nag.tar.gx"

2019-11-19 Thread Smith, Barry F. via petsc-users
For a while I had put in an incorrect URL in the download location. Perhaps you are using PETSc 3.12.0 and need to use 3.12.1 from https://www.mcs.anl.gov/petsc/download/index.html Otherwise please send configure.log > On Nov 19, 2019, at 4:40 AM, Santiago Andres Triana via petsc-

Re: [petsc-users] Question about changing time step during calculation

2019-11-17 Thread Smith, Barry F. via petsc-users
> On Nov 17, 2019, at 5:32 PM, Zhang, Hong via petsc-users > wrote: > > TSSetTimeStep() > > https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/TS/TSSetTimeStep.html#TSSetTimeStep > > If you want to decide the step size by yourself, make sure that the > adaptivity is turned off, e
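
A small sketch of fixing the step size in code and disabling adaptivity, as suggested (dt is whatever the application chooses):

    #include <petscts.h>

    PetscErrorCode UseFixedTimeStep(TS ts, PetscReal dt)
    {
      TSAdapt        adapt;
      PetscErrorCode ierr;

      ierr = TSSetTimeStep(ts, dt);CHKERRQ(ierr);
      ierr = TSGetAdapt(ts, &adapt);CHKERRQ(ierr);
      ierr = TSAdaptSetType(adapt, TSADAPTNONE);CHKERRQ(ierr); /* keep the user-chosen dt; no adaptive resizing */
      return 0;
    }

The command-line equivalent of the last two calls is -ts_adapt_type none.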

Re: [petsc-users] ts behavior question

2019-11-12 Thread Smith, Barry F. via petsc-users
ld be done at the beginning of the RHS function? > > On Tue, Nov 12, 2019 at 3:41 PM Smith, Barry F. wrote: > > > > On Nov 12, 2019, at 2:09 PM, Gideon Simpson > > wrote: > > > > So this might be a resolution/another question. Part of the reason to us

Re: [petsc-users] ts behavior question

2019-11-12 Thread Smith, Barry F. via petsc-users
you should not be changing values and thus should use read(). > > On Tue, Nov 12, 2019 at 10:43 AM Smith, Barry F. wrote: > > For any vector you only read you should use the read version. > > Sometimes the vector may not be locked and hence the other routine can be &

Re: [petsc-users] ts behavior question

2019-11-12 Thread Smith, Barry F. via petsc-users
ll do the check. If you call it on local ghosted vectors it doesn't check if the vector is locked since the ghosted version is a copy of the true locked vector. Barry > > On Tue, Nov 12, 2019 at 12:33 AM Smith, Barry F. wrote: > > > > On Nov 11, 2019, at 7:00 PM, Gide

Re: [petsc-users] ts behavior question

2019-11-11 Thread Smith, Barry F. via petsc-users
> On Nov 11, 2019, at 7:00 PM, Gideon Simpson via petsc-users > wrote: > > I noticed that when I am solving a problem with the ts and I am *not* using a > da, if I want to use an implicit time stepping routine: > 1. I have to explicitly provide the Jacobian Yes > 2. When I do provide th

Re: [petsc-users] Line search Ended due to ynorm, nondeterministic stagnation

2019-11-11 Thread Smith, Barry F. via petsc-users
Mark, What are you using for KSP rtol ? It looks like 1.e-1 from > 0 KSP Residual norm 2.654593713313e-03 > ... > 41 KSP Residual norm 2.515907124549e-04 What about SNES stol, are you setting that? > Line search: Ended due to ynorm < stol*xnorm (1.047067861804e-0

Re: [petsc-users] nondeterministic behavior of MUMPS when filtering out zero rows and columns

2019-11-07 Thread Smith, Barry F. via petsc-users
Make sure you have the latest PETSc and MUMPS installed; they have fixed bugs in MUMPs over time. Hanging locations are best found with a debugger; there is really no other way. If you have a parallel debugger like DDT use it. If you don't you can use the PETSc option -start_in_debugger

Re: [petsc-users] solve problem with pastix

2019-11-06 Thread Smith, Barry F. via petsc-users
setaffinity: Invalid argument only happens when I launch the job with > sbatch. Running without scheduler is fine. I think this has something to do > with pastix. > > Giang > > > On Wed, Nov 6, 2019 at 4:37 AM Smith, Barry F. wrote: > > Google finds this > https:/

Re: [petsc-users] solve problem with pastix

2019-11-05 Thread Smith, Barry F. via petsc-users
Google finds this https://gforge.inria.fr/forum/forum.php?thread_id=32824&forum_id=599&group_id=186 > On Nov 5, 2019, at 7:01 PM, Matthew Knepley via petsc-users > wrote: > > I have no idea. That is a good question for the PasTix list. > > Thanks, > > Matt > > On Tue, Nov 5, 201

Re: [petsc-users] --with-64-bit-indices=1

2019-11-04 Thread Smith, Barry F. via petsc-users
is pretty large (mesh is 12,001x 301). I am also attaching the > output of the code in case that could provide more info. Do you know how I > should proceed? > > Thanks, > > Anthony > > On Mon, Nov 4, 2019 at 1:46 PM Smith, Barry F. wrote: > > > > &g

Re: [petsc-users] --with-64-bit-indices=1

2019-11-04 Thread Smith, Barry F. via petsc-users
> On Nov 4, 2019, at 2:14 PM, Anthony Paul Haas via petsc-users > wrote: > > Hello, > > I ran into an issue while using Mumps from Petsc. I got the following error > (see below please). Somebody suggested that I compile Petsc with > --with-64-bit-indices=1. Will that suffice? Current

Re: [petsc-users] doubts on VecScatterCreate

2019-11-04 Thread Smith, Barry F. via petsc-users
It works for me. Please send a complete code that fails. > On Nov 3, 2019, at 11:41 PM, Emmanuel Ayala via petsc-users > wrote: > Hi everyone, thanks in advance. > > I have three parallel vectors: A, B and C. A and B have different sizes, and > C must contain these two vectors (MatL

Re: [petsc-users] VecDuplicate for FFTW-Vec causes VecDestroy to fail conditionally on VecLoad

2019-11-01 Thread Smith, Barry F. via petsc-users
> On Nov 1, 2019, at 4:50 PM, Zhang, Junchao via petsc-users > wrote: > > I know nothing about Vec FFTW, You are lucky :-) > but if you can provide hdf5 files in your test, I will see if I can reproduce > it. > --Junchao Zhang > > > On Fri, Nov 1, 2019 at 2:08 PM Sajid Ali via petsc-us

Re: [petsc-users] Do the guards against calling MPI_Comm_dup() in PetscCommDuplicate() apply with Fortran?

2019-11-01 Thread Smith, Barry F. via petsc-users
that the older OpenMPI worked fine. Barry > >> On 01.11.2019 at 16:24, Smith, Barry F. wrote: >> >> >> Certain OpenMPI versions have bugs where even when you properly duplicate >> and then free communicators it eventually "runs out of communicators"

Re: [petsc-users] Do the guards against calling MPI_Comm_dup() in PetscCommDuplicate() apply with Fortran?

2019-11-01 Thread Smith, Barry F. via petsc-users
Certain OpenMPI versions have bugs where even when you properly duplicate and then free communicators it eventually "runs out of communicators". This is a definitely a bug and was fixed in later OpenMPI versions. We wasted a lot of time tracking down this bug in the past. By now it is an o

Re: [petsc-users] PETSc 3.12 with .f90 files

2019-10-29 Thread Smith, Barry F. via petsc-users
The problem is that this change DOES use the preprocessor on the f90 file, does it not? We need a rule that does not use the preprocessor. Barry > On Oct 29, 2019, at 10:50 AM, Matthew Knepley via petsc-users > wrote: > > On Tue, Oct 29, 2019 at 11:38 AM Randall Mackie wrote: > Hi M

Re: [petsc-users] Correct way to access a sequential version of a DMDA Vec?

2019-10-27 Thread Smith, Barry F. via petsc-users
This won't work as written for two reasons 1) the VecScatterCreateToAll() will just concatenate the values from each process in a long array on each process, thus the resulting values will be "scrambled" and it won't be practical to access the values (because the parallel layout of DMDA ve
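
For reference, a hedged sketch of the usual workaround for reason 1: go through the DMDA's natural ordering first and only then scatter to a single rank (da and the global vector g are assumed to exist; the whole vector still ends up on one process, so memory remains a concern for large problems):

    #include <petscdmda.h>

    PetscErrorCode GatherDMDAVecToZero(DM da, Vec g, Vec *seq)
    {
      Vec            natural;
      VecScatter     tozero;
      PetscErrorCode ierr;

      ierr = DMDACreateNaturalVector(da, &natural);CHKERRQ(ierr);
      ierr = DMDAGlobalToNaturalBegin(da, g, INSERT_VALUES, natural);CHKERRQ(ierr); /* undo the DMDA's parallel permutation */
      ierr = DMDAGlobalToNaturalEnd(da, g, INSERT_VALUES, natural);CHKERRQ(ierr);
      ierr = VecScatterCreateToZero(natural, &tozero, seq);CHKERRQ(ierr);           /* the full copy lands on rank 0 only */
      ierr = VecScatterBegin(tozero, natural, *seq, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
      ierr = VecScatterEnd(tozero, natural, *seq, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
      ierr = VecScatterDestroy(&tozero);CHKERRQ(ierr);
      ierr = VecDestroy(&natural);CHKERRQ(ierr);
      return 0;
    }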

Re: [petsc-users] BAIJCUSPARSE?

2019-10-25 Thread Smith, Barry F. via petsc-users
You would need to investigate if the Nvidia cuSPARSE package supports such a format. If it does then it would be reasonably straightforward for you to hook up the required interface from PETSc. If it does not then it is a massive job to provide such code and you should see if any open source

Re: [petsc-users] 'Inserting a new nonzero' issue on a reassembled matrix in parallel

2019-10-25 Thread Smith, Barry F. via petsc-users
st line. But I was probably mistaken - if it was inserted it would have > been > row 0: (0, 1.), (9, 0.) > > on the first line instead? > > Thibaut > > > > On 25/10/2019 14:41, Smith, Barry F. wrote: >> >> >> > On Oct 24, 2019, at 5:09 AM,

Re: [petsc-users] Block preconditioning for 3d problem

2019-10-25 Thread Smith, Barry F. via petsc-users
will take your advice and look at reformulating my outer problem for a SNES > (line search) solve. > > Cheers, Dave. > > On Fri, 25 Oct. 2019, 2:52 am Smith, Barry F., wrote: > > If you are "throwing things" away in computing the Jacobian then any > expec

Re: [petsc-users] 'Inserting a new nonzero' issue on a reassembled matrix in parallel

2019-10-25 Thread Smith, Barry F. via petsc-users
> On Oct 24, 2019, at 5:09 AM, Thibaut Appel wrote: > > Hi Matthew, > > Thanks for having a look, your example runs just like mine in Fortran. > > In serial, the value (0.0,0.0) was inserted whereas it shouldn't have. I'm sorry, I don't see this for the serial case: $ petscmpiexec -n 1 ./e

Re: [petsc-users] Block preconditioning for 3d problem

2019-10-24 Thread Smith, Barry F. via petsc-users
rew Newton solvers, especially when tackling problems with potentially interesting nonlinearities. Barry > On Oct 14, 2019, at 8:18 PM, Dave Lee wrote: > > Hi Barry, > > I've replied inline: > > On Mon, Oct 14, 2019 at 4:07 PM Smith, Barry F. wrote: > > T

Re: [petsc-users] VI: RS vs SS

2019-10-24 Thread Smith, Barry F. via petsc-users
See bottom > On Oct 14, 2019, at 1:12 PM, Justin Chang via petsc-users > wrote: > > It might depend on your application, but for my stuff on maximum principles > for advection-diffusion, I found RS to be much better than SS. Here’s the > paper I wrote documenting the performance numbers I c

Re: [petsc-users] 'Inserting a new nonzero' issue on a reassembled matrix in parallel

2019-10-23 Thread Smith, Barry F. via petsc-users
Thanks for the test case. There is a bug in the code; the check is not in the correct place. I'll be working on a patch for 3.12 Barry > On Oct 23, 2019, at 8:31 PM, Matthew Knepley via petsc-users > wrote: > > On Tue, Oct 22, 2019 at 1:37 PM Thibaut Appel > wrote: > Hi both, > >

Re: [petsc-users] matsetvalueslocal for aijcusparse matrix

2019-10-22 Thread Smith, Barry F. via petsc-users
; and "Stash has 0 entries, uses 0 mallocs." > > If I run the same code with -test_mat_type aijcusparse, it takes forever to > finish step 10. Does this step really involve moving data from host to > devices? Do I need to have more changes to use aijcusparse other than ju
