[petsc-dev] reproducing crashes in the test harness

2021-03-29 Thread Barry Smith

# FAILED snes_tutorials-ex12_quad_hpddm_reuse_threshold 
snes_tutorials-ex12_p4est_nc_singular_2d_hpddm snes_tutorials-ex56_hpddm 
snes_tutorials-ex12_quad_hpddm_reuse_threshold_baij sys_tests-ex53_2 
snes_tutorials-ex12_quad_hpddm_reuse_baij snes_tutorials-ex12_quad_hpddm_reuse 
snes_tutorials-ex12_p4est_singular_2d_hpddm 
snes_tutorials-ex12_tri_parmetis_hpddm snes_tutorials-ex12_quad_singular_hpddm 
sys_tests-ex26_1 sys_tests-ex26_2 snes_tutorials-ex12_tri_parmetis_hpddm_baij 
snes_tutorials-ex12_tri_hpddm_reuse_baij snes_tutorials-ex12_tri_hpddm_reus

Scott,

  Any thoughts on how the test harness could tell the developer exactly how to 
reproduce a problematic case in the debugger, without them digging around in 
the code to check arguments, etc.?

  So, for example, it could print "Run: mpiexec -n N ./xxx args -start_in_debugger" to reproduce 
this problem. Then one could just cut and paste and be debugging away.

  Thanks

  Barry



Re: [petsc-dev] Possible SF bug

2021-03-29 Thread Junchao Zhang
Matt,
  I can reproduce the error. Let me see what is wrong.
  Thanks.
--Junchao Zhang


On Mon, Mar 29, 2021 at 2:16 PM Matthew Knepley  wrote:

> Junchao,
>
> I have an SF problem, which I think is a caching bug, but it is hard to
> see what is happening in the internals. I have made a small example which
> should help you see what is wrong. It is attached.
>
> If you run without arguments, you get
>
> master *:~/Downloads/tmp/Salac$ ./forestHDF
> [0]PETSC ERROR: - Error Message
> --
> [0]PETSC ERROR: Null argument, when expecting valid pointer
> [0]PETSC ERROR: Trying to copy to a null pointer
> [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html
> for trouble shooting.
> [0]PETSC ERROR: Petsc Development GIT revision: v3.14.5-879-g03cacdc99d
>  GIT Date: 2021-03-22 01:02:08 +
> [0]PETSC ERROR: ./forestHDF on a arch-master-debug named
> MacBook-Pro.fios-router.home by knepley Mon Mar 29 15:14:16 2021
> [0]PETSC ERROR: Configure options --PETSC_ARCH=arch-master-debug
> --download-bamg --download-chaco --download-ctetgen --download-egads
> --download-eigen --download-exodusii --download-fftw --download-hpddm
> --download-libpng --download-metis --download-ml --download-mumps
> --download-netcdf --download-opencascade --download-p4est
> --download-parmetis --download-pnetcdf --download-scalapack
> --download-slepc --download-suitesparse --download-superlu_dist
> --download-triangle --with-cmake-exec=/PETSc3/petsc/apple/bin/cmake
> --with-ctest-exec=/PETSc3/petsc/apple/bin/ctest
> --with-hdf5-dir=/PETSc3/petsc/apple --with-mpi-dir=/PETSc3/petsc/apple
> --with-shared-libraries --with-slepc --with-zlib --download-tetgen
> [0]PETSC ERROR: #1 PetscMemcpy() at
> /PETSc3/petsc/petsc-dev/include/petscsys.h:1798
> [0]PETSC ERROR: #2 UnpackAndInsert_PetscReal_1_1() at
> /PETSc3/petsc/petsc-dev/src/vec/is/sf/impls/basic/sfpack.c:426
> [0]PETSC ERROR: #3 ScatterAndInsert_PetscReal_1_1() at
> /PETSc3/petsc/petsc-dev/src/vec/is/sf/impls/basic/sfpack.c:426
> [0]PETSC ERROR: #4 PetscSFLinkScatterLocal() at
> /PETSc3/petsc/petsc-dev/src/vec/is/sf/impls/basic/sfpack.c:1248
> [0]PETSC ERROR: #5 PetscSFBcastBegin_Basic() at
> /PETSc3/petsc/petsc-dev/src/vec/is/sf/impls/basic/sfbasic.c:193
> [0]PETSC ERROR: #6 PetscSFBcastWithMemTypeBegin() at
> /PETSc3/petsc/petsc-dev/src/vec/is/sf/interface/sf.c:1493
> [0]PETSC ERROR: #7 DMGlobalToLocalBegin() at
> /PETSc3/petsc/petsc-dev/src/dm/interface/dm.c:2565
> [0]PETSC ERROR: #8 VecView_Plex_HDF5_Internal() at
> /PETSc3/petsc/petsc-dev/src/dm/impls/plex/plexhdf5.c:251
> [0]PETSC ERROR: #9 VecView_Plex() at
> /PETSc3/petsc/petsc-dev/src/dm/impls/plex/plex.c:385
> [0]PETSC ERROR: #10 VecView_p4est() at
> /PETSc3/petsc/petsc-dev/src/dm/impls/forest/p4est/pforest.c:4922
> [0]PETSC ERROR: #11 VecView() at
> /PETSc3/petsc/petsc-dev/src/vec/vec/interface/vector.c:613
> [0]PETSC ERROR: #12 main() at
> /Users/knepley/Downloads/tmp/Salac/forestHDF.c:53
> [0]PETSC ERROR: PETSc Option Table entries:
> [0]PETSC ERROR: -malloc_debug
> [0]PETSC ERROR: End of Error Message ---send entire
> error message to petsc-ma...@mcs.anl.gov--
> application called MPI_Abort(MPI_COMM_SELF, 53001) - process 0
> [unset]: write_line error; fd=-1 buf=:cmd=abort exitcode=53001
>
> If you run with
>
>   ./forestHDF -write_early
>
> or
>
>   ./forestHDF -no_g2l
>
> Then it is fine. Thus it appears to me that if you run a G2L at the wrong
> time, something is incorrectly cached.
>
>   Thanks,
>
> Matt
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
> 
>


[petsc-dev] Possible SF bug

2021-03-29 Thread Matthew Knepley
Junchao,

I have an SF problem, which I think is a caching bug, but it is hard to see
what is happening in the internals. I have made a small example which
should help you see what is wrong. It is attached.

If you run without arguments, you get

master *:~/Downloads/tmp/Salac$ ./forestHDF
[0]PETSC ERROR: - Error Message
--
[0]PETSC ERROR: Null argument, when expecting valid pointer
[0]PETSC ERROR: Trying to copy to a null pointer
[0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html
for trouble shooting.
[0]PETSC ERROR: Petsc Development GIT revision: v3.14.5-879-g03cacdc99d
 GIT Date: 2021-03-22 01:02:08 +
[0]PETSC ERROR: ./forestHDF on a arch-master-debug named
MacBook-Pro.fios-router.home by knepley Mon Mar 29 15:14:16 2021
[0]PETSC ERROR: Configure options --PETSC_ARCH=arch-master-debug
--download-bamg --download-chaco --download-ctetgen --download-egads
--download-eigen --download-exodusii --download-fftw --download-hpddm
--download-libpng --download-metis --download-ml --download-mumps
--download-netcdf --download-opencascade --download-p4est
--download-parmetis --download-pnetcdf --download-scalapack
--download-slepc --download-suitesparse --download-superlu_dist
--download-triangle --with-cmake-exec=/PETSc3/petsc/apple/bin/cmake
--with-ctest-exec=/PETSc3/petsc/apple/bin/ctest
--with-hdf5-dir=/PETSc3/petsc/apple --with-mpi-dir=/PETSc3/petsc/apple
--with-shared-libraries --with-slepc --with-zlib --download-tetgen
[0]PETSC ERROR: #1 PetscMemcpy() at
/PETSc3/petsc/petsc-dev/include/petscsys.h:1798
[0]PETSC ERROR: #2 UnpackAndInsert_PetscReal_1_1() at
/PETSc3/petsc/petsc-dev/src/vec/is/sf/impls/basic/sfpack.c:426
[0]PETSC ERROR: #3 ScatterAndInsert_PetscReal_1_1() at
/PETSc3/petsc/petsc-dev/src/vec/is/sf/impls/basic/sfpack.c:426
[0]PETSC ERROR: #4 PetscSFLinkScatterLocal() at
/PETSc3/petsc/petsc-dev/src/vec/is/sf/impls/basic/sfpack.c:1248
[0]PETSC ERROR: #5 PetscSFBcastBegin_Basic() at
/PETSc3/petsc/petsc-dev/src/vec/is/sf/impls/basic/sfbasic.c:193
[0]PETSC ERROR: #6 PetscSFBcastWithMemTypeBegin() at
/PETSc3/petsc/petsc-dev/src/vec/is/sf/interface/sf.c:1493
[0]PETSC ERROR: #7 DMGlobalToLocalBegin() at
/PETSc3/petsc/petsc-dev/src/dm/interface/dm.c:2565
[0]PETSC ERROR: #8 VecView_Plex_HDF5_Internal() at
/PETSc3/petsc/petsc-dev/src/dm/impls/plex/plexhdf5.c:251
[0]PETSC ERROR: #9 VecView_Plex() at
/PETSc3/petsc/petsc-dev/src/dm/impls/plex/plex.c:385
[0]PETSC ERROR: #10 VecView_p4est() at
/PETSc3/petsc/petsc-dev/src/dm/impls/forest/p4est/pforest.c:4922
[0]PETSC ERROR: #11 VecView() at
/PETSc3/petsc/petsc-dev/src/vec/vec/interface/vector.c:613
[0]PETSC ERROR: #12 main() at
/Users/knepley/Downloads/tmp/Salac/forestHDF.c:53
[0]PETSC ERROR: PETSc Option Table entries:
[0]PETSC ERROR: -malloc_debug
[0]PETSC ERROR: End of Error Message ---send entire
error message to petsc-ma...@mcs.anl.gov--
application called MPI_Abort(MPI_COMM_SELF, 53001) - process 0
[unset]: write_line error; fd=-1 buf=:cmd=abort exitcode=53001

If you run with

  ./forestHDF -write_early

or

  ./forestHDF -no_g2l

Then it is fine. Thus it appears to me that if you run a G2L at the wrong
time, something is incorrectly cached.

  Thanks,

Matt

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


forestHDF.c
Description: Binary data
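
Since the attached forestHDF.c is not reproduced in the archive, below is a
minimal sketch of the kind of program described above, for illustration only:
the forest setup, the single-field discretization, the handling of
-write_early / -no_g2l, and the output file name are all assumptions here,
not Matt's actual code.

#include <petsc.h>
#include <petscdmforest.h>
#include <petscfe.h>
#include <petscviewerhdf5.h>

int main(int argc, char **argv)
{
  DM             dm;
  PetscFE        fe;
  Vec            g, l;
  PetscViewer    viewer;
  PetscBool      write_early = PETSC_FALSE, no_g2l = PETSC_FALSE;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ierr = PetscOptionsGetBool(NULL, NULL, "-write_early", &write_early, NULL);CHKERRQ(ierr);
  ierr = PetscOptionsGetBool(NULL, NULL, "-no_g2l", &no_g2l, NULL);CHKERRQ(ierr);

  /* A 2D p4est forest DM on the unit square; needs a PETSc build with p4est and HDF5 */
  ierr = DMCreate(PETSC_COMM_WORLD, &dm);CHKERRQ(ierr);
  ierr = DMSetType(dm, DMP4EST);CHKERRQ(ierr);
  ierr = DMForestSetTopology(dm, "unit");CHKERRQ(ierr);
  ierr = DMSetFromOptions(dm);CHKERRQ(ierr);
  ierr = DMSetUp(dm);CHKERRQ(ierr);

  /* A single scalar field so the DM has a section for global/local vectors */
  ierr = PetscFECreateDefault(PETSC_COMM_WORLD, 2, 1, PETSC_FALSE, NULL, -1, &fe);CHKERRQ(ierr);
  ierr = DMSetField(dm, 0, NULL, (PetscObject)fe);CHKERRQ(ierr);
  ierr = DMCreateDS(dm);CHKERRQ(ierr);
  ierr = PetscFEDestroy(&fe);CHKERRQ(ierr);

  ierr = DMCreateGlobalVector(dm, &g);CHKERRQ(ierr);
  ierr = PetscObjectSetName((PetscObject)g, "u");CHKERRQ(ierr);
  ierr = VecSet(g, 1.0);CHKERRQ(ierr);

  ierr = PetscViewerHDF5Open(PETSC_COMM_WORLD, "forest.h5", FILE_MODE_WRITE, &viewer);CHKERRQ(ierr);
  if (write_early) { ierr = VecView(g, viewer);CHKERRQ(ierr); }  /* write before the explicit G2L: reported to work */

  if (!no_g2l) {  /* the explicit global-to-local that seems to leave bad cached SF state */
    ierr = DMCreateLocalVector(dm, &l);CHKERRQ(ierr);
    ierr = DMGlobalToLocalBegin(dm, g, INSERT_VALUES, l);CHKERRQ(ierr);
    ierr = DMGlobalToLocalEnd(dm, g, INSERT_VALUES, l);CHKERRQ(ierr);
    ierr = VecDestroy(&l);CHKERRQ(ierr);
  }

  if (!write_early) { ierr = VecView(g, viewer);CHKERRQ(ierr); } /* write after the G2L: fails as in the trace above */

  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
  ierr = VecDestroy(&g);CHKERRQ(ierr);
  ierr = DMDestroy(&dm);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}

Under this assumed layout, a plain ./forestHDF run does the explicit
global-to-local and then fails inside VecView, while -write_early or -no_g2l
avoids that combination, matching the behaviour reported above.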


Re: [petsc-dev] petsc release plan for march 2021

2021-03-29 Thread Patrick Sanan
I added that milestone to some of the current docs MRs, but it’s probably
too tight, so I suggest removing it - it probably doesn’t matter much what’s
in the tarball for the docs, so it’s safer to ship the current docs. We could have a
patch release that updates the docs for the release, once we’re happy with the
docs build on main.

Satish Balay via petsc-dev  wrote on Sun. 28 March
2021 at 20:58:

> Perhaps I should not have kept a weekend deadline here.
>
> Let's use 'freeze': 'March 29 (Mon) 5PM CST' - but retain the release date
> 'March 30 5PM EST' (we have March 31 - if needed).
>
> Satish
>
>  On Sun, 28 Mar 2021, Satish Balay via petsc-dev wrote:
>
> > A reminder!
> >
> > Satish
> >
> > On Tue, 9 Mar 2021, Satish Balay via petsc-dev wrote:
> >
> > > All,
> > >
> > > It's time for another PETSc release - due at the end of March.
> > >
> > > For this release [3.15], we will work with the following dates:
> > >
> > > - feature freeze: March 28 say 5PM EST
> > > - release: March 30 say 5PM EST
> > >
> > > Merges after the freeze should contain only fixes that would normally be
> > > acceptable to the release workflow.
> > >
> > > I've created a new milestone, 'v3.15-release'. So if you are working on
> > > an MR with the goal of merging before the release - it's best to use this
> > > milestone with the MR.
> > >
> > > And it would be good to avoid merging large changes at the last
> > > minute, and not to have merge requests stuck in need of reviews, testing,
> > > and other necessary tasks.
> > >
> > > And I would think the testing/CI resources would get stressed in this
> > > timeframe - so it would be good to use them judiciously if possible.
> > >
> > > - if there are failures in stage-2 or 3 - and it's no longer necessary
> > >   to complete all the jobs - one can 'cancel' the pipeline.
> > > - if a fix needs to be tested - one can first test with only the
> > >   failed jobs (if these are known) - before doing a full test pipeline, i.e.:
> > >   - use the automatically started and paused 'merge-request' pipeline
> > >     (or start a new 'web' pipeline, and cancel it immediately)
> > >   - now toggle only the jobs that need to be run
> > >   - [on success of the selected jobs] if one wants to run the full
> > >     pipeline - click 'retry' - and the remaining canceled jobs should now get
> > >     scheduled.
> > >
> > > Thanks,
> > > Satish
> > >
> >
>
>