Re: [petsc-users] 'Inserting a new nonzero' issue on a reassembled matrix in parallel

2019-10-23 Thread Smith, Barry F. via petsc-users


   Thanks for the test case. There is a bug in the code; the check is not in
the correct place. I'll be working on a patch for 3.12.

   Barry


> On Oct 23, 2019, at 8:31 PM, Matthew Knepley via petsc-users 
>  wrote:
> 
> On Tue, Oct 22, 2019 at 1:37 PM Thibaut Appel  
> wrote:
> Hi both,
> 
> Please find attached a tiny example (in Fortran, sorry Matthew) that - I 
> think - reproduces the problem we mentioned.
> 
> Let me know.
> 
> 
> Okay, I converted to C so I could understand, and it runs fine for me:
> 
> master *:~/Downloads/tmp$ PETSC_ARCH=arch-master-complex-debug make main
> /PETSc3/petsc/bin/mpicc -o main.o -c -Wall -Wwrite-strings 
> -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector 
> -Qunused-arguments -fvisibility=hidden -g3   
> -I/PETSc3/petsc/petsc-dev/include 
> -I/PETSc3/petsc/petsc-dev/arch-master-complex-debug/include 
> -I/PETSc3/petsc/include -I/opt/X11/include `pwd`/main.c
> /PETSc3/petsc/bin/mpicc -Wl,-multiply_defined,suppress -Wl,-multiply_defined 
> -Wl,suppress -Wl,-commons,use_dylibs -Wl,-search_paths_first 
> -Wl,-no_compact_unwind  -Wall -Wwrite-strings -Wno-strict-aliasing 
> -Wno-unknown-pragmas -fstack-protector -Qunused-arguments -fvisibility=hidden 
> -g3  -o main main.o 
> -Wl,-rpath,/PETSc3/petsc/petsc-dev/arch-master-complex-debug/lib 
> -L/PETSc3/petsc/petsc-dev/arch-master-complex-debug/lib 
> -Wl,-rpath,/PETSc3/petsc/petsc-dev/arch-master-complex-debug/lib 
> -L/PETSc3/petsc/petsc-dev/arch-master-complex-debug/lib 
> -Wl,-rpath,/PETSc3/petsc/lib -L/PETSc3/petsc/lib -Wl,-rpath,/opt/X11/lib 
> -L/opt/X11/lib -Wl,-rpath,/usr/local/lib/gcc/x86_64-apple-darwin10.4.0/4.6.0 
> -L/usr/local/lib/gcc/x86_64-apple-darwin10.4.0/4.6.0 
> -Wl,-rpath,/usr/local/lib -L/usr/local/lib -lpetsc -lfftw3_mpi -lfftw3 
> -llapack -lblas -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -lchaco 
> -lparmetis -lmetis -ltriangle -lz -lX11 -lctetgen -lstdc++ -ldl -lmpichf90 
> -lpmpich -lmpich -lopa -lmpl -lpthread -lgfortran -lgcc_s.10.5 -lstdc++ -ldl
> master *:~/Downloads/tmp$ ./main
> After first assembly:
> Mat Object: 1 MPI processes
>   type: seqaij
> row 0: (0, 1. + 1. i)
> row 1: (1, 1. + 1. i)
> row 2: (2, 1. + 1. i)
> row 3: (3, 1. + 1. i)
> row 4: (4, 1. + 1. i)
> row 5: (5, 1. + 1. i)
> row 6: (6, 1. + 1. i)
> row 7: (7, 1. + 1. i)
> row 8: (8, 1. + 1. i)
> row 9: (9, 1. + 1. i)
> After second assembly:
> Mat Object: 1 MPI processes
>   type: seqaij
> row 0: (0, 1. + 1. i)
> row 1: (1, 1. + 1. i)
> row 2: (2, 1. + 1. i)
> row 3: (3, 1. + 1. i)
> row 4: (4, 1. + 1. i)
> row 5: (5, 1. + 1. i)
> row 6: (6, 1. + 1. i)
> row 7: (7, 1. + 1. i)
> row 8: (8, 1. + 1. i)
> row 9: (9, 1. + 1. i)
> row 0 col: 9 val: 0. + 0. i
> 
> I attach my code.  So then I ran your Fortran:
> 
> /PETSc3/petsc/bin/mpif90 -c -Wall -ffree-line-length-0 
> -Wno-unused-dummy-argument -Wno-unused-variable -g
> -I/PETSc3/petsc/petsc-dev/include 
> -I/PETSc3/petsc/petsc-dev/arch-master-complex-debug/include 
> -I/PETSc3/petsc/include -I/opt/X11/include  -o main2.o main2.F90
> /PETSc3/petsc/bin/mpif90 -Wl,-multiply_defined,suppress -Wl,-multiply_defined 
> -Wl,suppress -Wl,-commons,use_dylibs -Wl,-search_paths_first 
> -Wl,-no_compact_unwind  -Wall -ffree-line-length-0 -Wno-unused-dummy-argument 
> -Wno-unused-variable -g   -o main2 main2.o 
> -Wl,-rpath,/PETSc3/petsc/petsc-dev/arch-master-complex-debug/lib 
> -L/PETSc3/petsc/petsc-dev/arch-master-complex-debug/lib 
> -Wl,-rpath,/PETSc3/petsc/petsc-dev/arch-master-complex-debug/lib 
> -L/PETSc3/petsc/petsc-dev/arch-master-complex-debug/lib 
> -Wl,-rpath,/PETSc3/petsc/lib -L/PETSc3/petsc/lib -Wl,-rpath,/opt/X11/lib 
> -L/opt/X11/lib -Wl,-rpath,/usr/local/lib/gcc/x86_64-apple-darwin10.4.0/4.6.0 
> -L/usr/local/lib/gcc/x86_64-apple-darwin10.4.0/4.6.0 
> -Wl,-rpath,/usr/local/lib -L/usr/local/lib -lpetsc -lfftw3_mpi -lfftw3 
> -llapack -lblas -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -lchaco 
> -lparmetis -lmetis -ltriangle -lz -lX11 -lctetgen -lstdc++ -ldl -lmpichf90 
> -lpmpich -lmpich -lopa -lmpl -lpthread -lgfortran -lgcc_s.10.5 -lstdc++ -ldl
> master *:~/Downloads/tmp$ ./main2
>  After first assembly:
> Mat Object: 1 MPI processes
>   type: seqaij
> row 0: (0, 1.) 
> row 1: (1, 1.) 
> row 2: (2, 1.) 
> row 3: (3, 1.) 
> row 4: (4, 1.) 
> row 5: (5, 1.) 
> row 6: (6, 1.) 
> row 7: (7, 1.) 
> row 8: (8, 1.) 
> row 9: (9, 1.) 
>  After second assembly:
> Mat Object: 1 MPI processes
>   type: seqaij
> row 0: (0, 1.) 
> row 1: (1, 1.) 
> row 2: (2, 1.) 
> row 3: (3, 1.) 
> row 4: (4, 1.) 
> row 5: (5, 1.) 
> row 6: (6, 1.) 
> row 7: (7, 1.) 
> row 8: (8, 1.) 
> row 9: (9, 1.) 
>  row:  0 col:  9 val:  0.00E+00  0.00E+00
> 
> I am not seeing an error. Am I not running it correctly?
> 
>   Thanks,
> 
>  Matt
> Thibaut
> 
> 
> 
> On 22/10/2019 17:48, Matthew Knepley wrote:
>> On Tue, Oct 22, 2019 at 12:43 PM Thibaut Appel via petsc-users 
>>  wrote:
>> Hi Hong,
>> 
>> Thank you for having a look, I 

Re: [petsc-users] TS_SSP implementation for co-dependent variables

2019-10-23 Thread Jed Brown via petsc-users
Manuel Valera  writes:

> Yes, all of that sounds correct to me.
>
> No, I haven't tried embedding the column integral into the RHS; right now I
> am unable to think of how to do this without the solution of the previous
> intermediate stage. Any ideas are welcome.

Do you have some technical notes on your present formulation?  I think
it just amounts to performing the integration and then evaluating the
differential operator using results of that integral.
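
For what it's worth, a minimal sketch of that approach with TSSetRHSFunction is
below (one locally stored vertical column, unit cell height; the top-down
cumulative sum standing in for the hydrostatic integral, and the operator
applied to it, are placeholders rather than your model's actual equations):

static PetscErrorCode RHSFunction(TS ts, PetscReal t, Vec U, Vec F, void *ctx)
{
  const PetscScalar *u;
  PetscScalar       *f, p = 0.0;
  PetscInt           i, n;
  PetscErrorCode     ierr;

  PetscFunctionBeginUser;
  ierr = VecGetLocalSize(U, &n);CHKERRQ(ierr);
  ierr = VecGetArrayRead(U, &u);CHKERRQ(ierr);
  ierr = VecGetArray(F, &f);CHKERRQ(ierr);
  for (i = n - 1; i >= 0; --i) {   /* column integral from the top, dz = 1 */
    p   += u[i];                   /* evaluated from the stage state passed in */
    f[i] = -p;                     /* placeholder operator that uses the integral */
  }
  ierr = VecRestoreArrayRead(U, &u);CHKERRQ(ierr);
  ierr = VecRestoreArray(F, &f);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

/* registered once with: TSSetRHSFunction(ts, NULL, RHSFunction, NULL); */

Because the integral is recomputed from whatever stage vector the integrator
passes in, every SSP stage sees a synchronized column integral and no splitting
is needed.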

> Thanks,
>
> On Wed, Oct 9, 2019 at 4:18 PM Jed Brown  wrote:
>
>> Manuel Valera  writes:
>>
>> > Sorry, I don't follow this last email; my spatial discretization is fixed.
>> > The problem is caused by the choice of vertical coordinate, in this case
>> > sigma, which calls for an integration of the hydrostatic pressure to
>> > correct for the right velocities.
>>
>> Ah, fine.  To phrase this differently, you are currently solving an
>> integro-differential equation.  With an explicit integrator, you should
>> be able to embed that in the RHS function.  With an implicit integrator,
>> that causes the Jacobian to lose sparsity (the column integral is dense
>> coupling) so it's sometimes preferable to add pressure as an explicit
>> variable (or transform your existing variable set as part of a
>> preconditioner), in which case you get a differential algebraic equation
>> (the incompressible limit).
>>
>> Have you tried embedding the column integral into the RHS function to
>> make a single unsplit formulation?
>>
>> > I had RK3 working before and SSP is much more stable; I can use way
>> > bigger DTs, but then I get this asynchronous time integration. With RK3 I
>> > can operate on the intermediate states and thus advance everything in
>> > synchronization, but bigger DTs are not viable, it turns unstable quickly.
>> >
>> > On Wed, Oct 9, 2019 at 3:58 PM Jed Brown  wrote:
>> >
>> >> Is it a problem with the spatial discretization or with the time
>> >> discretization that you've been using thus far?  (This sort of problem
>> >> can occur for either reason.)
>> >>
>> >> Note that an SSP method is merely "preserving" -- the spatial
>> >> discretization needs to be strongly stable for an SSP method to preserve
>> >> it.  It sounds like yours is not, so maybe there is no particular
>> >> benefit to using SSP over any other method (but likely tighter time step
>> >> restriction).
>> >>
>> >> Manuel Valera  writes:
>> >>
>> >> > To correct for the deformation of the sigma coordinate grid... without
>> >> > this correction the velocities become unphysical in the zones of high
>> >> > slope of the grid. This is very specific to our model and will probably
>> >> > be solved by updating the transformation of the equations, but that's
>> >> > not close to happening right now.
>> >> >
>> >> > On Wed, Oct 9, 2019 at 3:47 PM Jed Brown  wrote:
>> >> >
>> >> >> Manuel Valera  writes:
>> >> >>
>> >> >> > Thanks,
>> >> >> >
>> >> >> > My time integration schemes are all explicit, sorry if this is a very
>> >> >> > atypical setup. This is similar to barotropic splitting but not exactly:
>> >> >> > we don't have a free surface in the model; this is only to correct for
>> >> >> > sigma-coordinate deformations in the velocity field.
>> >> >> >
>> >> >> > From how I see it, this could be solved by obtaining the intermediate
>> >> >> > stages and then updating them accordingly; is this not possible to do?
>> >> >>
>> >> >> Why are you splitting if all components are explicit and not subcycled?
>> >> >>
>> >>
>>


Re: [petsc-users] 'Inserting a new nonzero' issue on a reassembled matrix in parallel

2019-10-23 Thread Matthew Knepley via petsc-users
On Tue, Oct 22, 2019 at 1:37 PM Thibaut Appel 
wrote:

> Hi both,
>
> Please find attached a tiny example (in Fortran, sorry Matthew) that - I
> think - reproduces the problem we mentioned.
>
> Let me know.
>
Okay, I converted to C so I could understand, and it runs fine for me:

master *:~/Downloads/tmp$ PETSC_ARCH=arch-master-complex-debug make main

/PETSc3/petsc/bin/mpicc -o main.o -c -Wall -Wwrite-strings
-Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector
-Qunused-arguments -fvisibility=hidden -g3   -I/PETSc3/petsc/petsc-dev/include
-I/PETSc3/petsc/petsc-dev/arch-master-complex-debug/include
-I/PETSc3/petsc/include -I/opt/X11/include `pwd`/main.c

/PETSc3/petsc/bin/mpicc -Wl,-multiply_defined,suppress
-Wl,-multiply_defined -Wl,suppress -Wl,-commons,use_dylibs
-Wl,-search_paths_first -Wl,-no_compact_unwind  -Wall -Wwrite-strings
-Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector
-Qunused-arguments -fvisibility=hidden -g3  -o main main.o
-Wl,-rpath,/PETSc3/petsc/petsc-dev/arch-master-complex-debug/lib
-L/PETSc3/petsc/petsc-dev/arch-master-complex-debug/lib
-Wl,-rpath,/PETSc3/petsc/petsc-dev/arch-master-complex-debug/lib
-L/PETSc3/petsc/petsc-dev/arch-master-complex-debug/lib
-Wl,-rpath,/PETSc3/petsc/lib -L/PETSc3/petsc/lib -Wl,-rpath,/opt/X11/lib
-L/opt/X11/lib
-Wl,-rpath,/usr/local/lib/gcc/x86_64-apple-darwin10.4.0/4.6.0
-L/usr/local/lib/gcc/x86_64-apple-darwin10.4.0/4.6.0
-Wl,-rpath,/usr/local/lib -L/usr/local/lib -lpetsc -lfftw3_mpi -lfftw3
-llapack -lblas -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -lchaco
-lparmetis -lmetis -ltriangle -lz -lX11 -lctetgen -lstdc++ -ldl -lmpichf90
-lpmpich -lmpich -lopa -lmpl -lpthread -lgfortran -lgcc_s.10.5 -lstdc++ -ldl

master *:~/Downloads/tmp$ ./main

After first assembly:
Mat Object: 1 MPI processes
  type: seqaij
row 0: (0, 1. + 1. i)
row 1: (1, 1. + 1. i)
row 2: (2, 1. + 1. i)
row 3: (3, 1. + 1. i)
row 4: (4, 1. + 1. i)
row 5: (5, 1. + 1. i)
row 6: (6, 1. + 1. i)
row 7: (7, 1. + 1. i)
row 8: (8, 1. + 1. i)
row 9: (9, 1. + 1. i)
After second assembly:
Mat Object: 1 MPI processes
  type: seqaij
row 0: (0, 1. + 1. i)
row 1: (1, 1. + 1. i)
row 2: (2, 1. + 1. i)
row 3: (3, 1. + 1. i)
row 4: (4, 1. + 1. i)
row 5: (5, 1. + 1. i)
row 6: (6, 1. + 1. i)
row 7: (7, 1. + 1. i)
row 8: (8, 1. + 1. i)
row 9: (9, 1. + 1. i)
row 0 col: 9 val: 0. + 0. i
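
A minimal sketch of this kind of reproducer (not the attached file itself; the
size n=10, the MAT_NEW_NONZERO_LOCATION_ERR option discussed in the thread, and
the probed (0,9) entry are assumed from the discussion above; the run above
used a complex build with the value 1+1i, while the sketch just uses 1):

#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat            A;
  PetscInt       i, n = 10, row = 0, col = 9;
  PetscScalar    v;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
  ierr = MatCreateAIJ(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, n, n, 2, NULL, 1, NULL, &A);CHKERRQ(ierr);
  for (i = 0; i < n; ++i) {ierr = MatSetValue(A, i, i, 1.0, INSERT_VALUES);CHKERRQ(ierr);} /* diagonal only */
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD, "After first assembly:\n");CHKERRQ(ierr);
  ierr = MatView(A, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);

  /* forbid new nonzero locations before reassembling */
  ierr = MatSetOption(A, MAT_NEW_NONZERO_LOCATION_ERR, PETSC_TRUE);CHKERRQ(ierr);
  for (i = 0; i < n; ++i) {ierr = MatSetValue(A, i, i, 1.0, INSERT_VALUES);CHKERRQ(ierr);} /* same pattern */
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD, "After second assembly:\n");CHKERRQ(ierr);
  ierr = MatView(A, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);

  /* read back an entry outside the assembled pattern; it comes back as zero */
  ierr = MatGetValues(A, 1, &row, 1, &col, &v);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD, "row %D col: %D val: %g + %g i\n", row, col,
                     (double)PetscRealPart(v), (double)PetscImaginaryPart(v));CHKERRQ(ierr);

  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}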

I attach my code.  So then I ran your Fortran:

/PETSc3/petsc/bin/mpif90 -c -Wall -ffree-line-length-0
-Wno-unused-dummy-argument -Wno-unused-variable -g
-I/PETSc3/petsc/petsc-dev/include
-I/PETSc3/petsc/petsc-dev/arch-master-complex-debug/include
-I/PETSc3/petsc/include -I/opt/X11/include  -o main2.o main2.F90

/PETSc3/petsc/bin/mpif90 -Wl,-multiply_defined,suppress
-Wl,-multiply_defined -Wl,suppress -Wl,-commons,use_dylibs
-Wl,-search_paths_first -Wl,-no_compact_unwind  -Wall -ffree-line-length-0
-Wno-unused-dummy-argument -Wno-unused-variable -g   -o main2 main2.o
-Wl,-rpath,/PETSc3/petsc/petsc-dev/arch-master-complex-debug/lib
-L/PETSc3/petsc/petsc-dev/arch-master-complex-debug/lib
-Wl,-rpath,/PETSc3/petsc/petsc-dev/arch-master-complex-debug/lib
-L/PETSc3/petsc/petsc-dev/arch-master-complex-debug/lib
-Wl,-rpath,/PETSc3/petsc/lib -L/PETSc3/petsc/lib -Wl,-rpath,/opt/X11/lib
-L/opt/X11/lib
-Wl,-rpath,/usr/local/lib/gcc/x86_64-apple-darwin10.4.0/4.6.0
-L/usr/local/lib/gcc/x86_64-apple-darwin10.4.0/4.6.0
-Wl,-rpath,/usr/local/lib -L/usr/local/lib -lpetsc -lfftw3_mpi -lfftw3
-llapack -lblas -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -lchaco
-lparmetis -lmetis -ltriangle -lz -lX11 -lctetgen -lstdc++ -ldl -lmpichf90
-lpmpich -lmpich -lopa -lmpl -lpthread -lgfortran -lgcc_s.10.5 -lstdc++ -ldl

master *:~/Downloads/tmp$ ./main2

 After first assembly:
Mat Object: 1 MPI processes
  type: seqaij
row 0: (0, 1.)
row 1: (1, 1.)
row 2: (2, 1.)
row 3: (3, 1.)
row 4: (4, 1.)
row 5: (5, 1.)
row 6: (6, 1.)
row 7: (7, 1.)
row 8: (8, 1.)
row 9: (9, 1.)
 After second assembly:
Mat Object: 1 MPI processes
  type: seqaij
row 0: (0, 1.)
row 1: (1, 1.)
row 2: (2, 1.)
row 3: (3, 1.)
row 4: (4, 1.)
row 5: (5, 1.)
row 6: (6, 1.)
row 7: (7, 1.)
row 8: (8, 1.)
row 9: (9, 1.)
 row:  0 col:  9 val:  0.00E+00  0.00E+00

I am not seeing an error. Am I not running it correctly?

  Thanks,

 Matt

> Thibaut
>
>
> On 22/10/2019 17:48, Matthew Knepley wrote:
>
> On Tue, Oct 22, 2019 at 12:43 PM Thibaut Appel via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
>> Hi Hong,
>>
>> Thank you for having a look. I copied/pasted your code snippet into
>> ex28.c and the error indeed appears if you change that col[0]. That's
>> because new non-zero locations are not allowed in the matrix once the
>> option MAT_NEW_NONZERO_LOCATION_ERR is set.
>>
>> I spent the day debugging the code and already checked my calls to
>> MatSetValues:
>>
>> For all MatSetValues calls corresponding to the row/col location in the
>> error messages in the subsequent 

Re: [petsc-users] anaconda installation of petsc

2019-10-23 Thread Balay, Satish via petsc-users
Likely this install is broken, as mpicc [used here] is unable to find the
'clang' that was used to build it.
And we have no idea how the petsc install in anaconda works.

Suggest installing PETSc from source - this is what we support if you encounter 
problems.

Satish

On Wed, 23 Oct 2019, Gideon Simpson via petsc-users wrote:

> I have an anaconda installation of petsc and I was trying to use it with
> some existing petsc codes, with the makefile:
> 
> include ${PETSC_DIR}/conf/variables
> include ${PETSC_DIR}/conf/rules
> 
> all: ex1
> 
> hello: ex1.o
> ${CLINKER} -o ex1 ex1.o ${LIBS} ${PETSC_LIB}
> 
> and PETSC_DIR=/Users/gideonsimpson/anaconda3/lib/petsc
> 
> However, when i try to call make on this, I get
> 
> mpicc -o ex1.o -c -march=core2 -mtune=haswell -mssse3 -ftree-vectorize
> -fPIC -fPIE -fstack-protector-strong -O2 -pipe -isystem
> /Users/gideonsimpson/anaconda3/include -O3 -D_FORTIFY_SOURCE=2
> -mmacosx-version-min=10.9 -isystem /Users/gideonsimpson/anaconda3/include
>  -I/Users/gideonsimpson/anaconda3/include  -D_FORTIFY_SOURCE=2
> -mmacosx-version-min=10.9 -isystem /Users/gideonsimpson/anaconda3/include
>  `pwd`/ex1.c
> /Users/gideonsimpson/anaconda3/bin/mpicc: line 282:
> x86_64-apple-darwin13.4.0-clang: command not found
> make: *** [ex1.o] Error 127
> 
> 
> 



Re: [petsc-users] Select a preconditioner for SLEPc eigenvalue solver Jacobi-Davidson

2019-10-23 Thread Jose E. Roman via petsc-users
Yes, it is confusing. Here is the explanation: when you use a target, the
preconditioner is built from the matrix A-sigma*B. By default, instead of
TARGET_MAGNITUDE we set LARGEST_MAGNITUDE, and in Jacobi-Davidson we treat this
case by setting sigma=PETSC_MAX_REAL, so the preconditioner is built from the
matrix B alone. In a standard eigenproblem we have B=I, hence there is no point
in using a preconditioner; that is why we set PCNONE.
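
In other words, if you want -st_pc_type lu to take effect with Jacobi-Davidson,
set a target so that the preconditioner is built from A-sigma*B. A minimal
sketch (eps and ierr as in ex2.c; the target value 7.8 is only an illustration
taken from the converged eigenvalue shown below):

  /* select eigenvalues closest to a target so the ST preconditioner is
     built from A - sigma*B and -st_pc_type lu acts on it */
  ierr = EPSSetType(eps, EPSJD);CHKERRQ(ierr);
  ierr = EPSSetWhichEigenpairs(eps, EPS_TARGET_MAGNITUDE);CHKERRQ(ierr);
  ierr = EPSSetTarget(eps, 7.8);CHKERRQ(ierr);   /* illustrative target only */
  ierr = EPSSetFromOptions(eps);CHKERRQ(ierr);   /* -st_ksp_type/-st_pc_type now apply */

On the command line, something like
./ex2 -eps_type jd -eps_target 7.8 -st_ksp_type gmres -st_pc_type lu -eps_view
should then report "type: lu" for the (st_) PC instead of "type: none".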

Jose


> On 22 Oct 2019, at 19:57, Fande Kong via petsc-users
>  wrote:
> 
> Hi All,
> 
> It looks like the preconditioner is hard-coded in the Jacobi-Davidson solver. 
> I could not select a preconditioner other than the default setting.
> 
> For example, I was trying to select LU, but PC NONE was still used.  I ran 
> standard example 2 in slepc/src/eps/examples/tutorials, and had the following 
> results.
> 
> 
> Thanks,
> 
> Fande
> 
> 
> ./ex2 -eps_type jd -st_ksp_type gmres  -st_pc_type lu   -eps_view  
> 
> 2-D Laplacian Eigenproblem, N=100 (10x10 grid)
> 
> EPS Object: 1 MPI processes
>   type: jd
> search subspace is orthogonalized
> block size=1
> type of the initial subspace: non-Krylov
> size of the subspace after restarting: 6
> number of vectors after restarting from the previous iteration: 1
> threshold for changing the target in the correction equation (fix): 0.01
>   problem type: symmetric eigenvalue problem
>   selected portion of the spectrum: largest eigenvalues in magnitude
>   number of eigenvalues (nev): 1
>   number of column vectors (ncv): 17
>   maximum dimension of projected problem (mpd): 17
>   maximum number of iterations: 1700
>   tolerance: 1e-08
>   convergence test: relative to the eigenvalue
> BV Object: 1 MPI processes
>   type: svec
>   17 columns of global length 100
>   vector orthogonalization method: classical Gram-Schmidt
>   orthogonalization refinement: if needed (eta: 0.7071)
>   block orthogonalization method: GS
>   doing matmult as a single matrix-matrix product
> DS Object: 1 MPI processes
>   type: hep
>   solving the problem with: Implicit QR method (_steqr)
> ST Object: 1 MPI processes
>   type: precond
>   shift: 1.79769e+308
>   number of matrices: 1
>   KSP Object: (st_) 1 MPI processes
> type: gmres
>   restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization 
> with no iterative refinement
>   happy breakdown tolerance 1e-30
> maximum iterations=90, initial guess is zero
> tolerances:  relative=0.0001, absolute=1e-50, divergence=1.
> left preconditioning
> using PRECONDITIONED norm type for convergence test
>   PC Object: (st_) 1 MPI processes
> type: none
> linear system matrix = precond matrix:
> Mat Object: 1 MPI processes
>   type: shell
>   rows=100, cols=100
>  Solution method: jd
> 
>  Number of requested eigenvalues: 1
>  Linear eigensolve converged (1 eigenpair) due to CONVERGED_TOL; iterations 20
>  ---------------------- --------------------
>             k             ||Ax-kx||/||kx||
>  ---------------------- --------------------
>         7.837972             7.71944e-10
>  ---------------------- --------------------
> 
> 
>