Re: [petsc-dev] [petsc-users] Mat created by DMStag cannot access ghost points

2022-06-10 Thread Patrick Sanan
Sorry about the long delay on this.
https://gitlab.com/petsc/petsc/-/merge_requests/5329




On Thu, Jun 2, 2022 at 15:01, Matthew Knepley wrote:

> On Thu, Jun 2, 2022 at 8:59 AM Patrick Sanan 
> wrote:
>
>> Thanks, Barry and Changqing! That seems reasonable to me, so I'll make an
>> MR with that change.
>>
>
> Hi Patrick,
>
> In the MR, could you add that option to all places we internally use
> Preallocator? I think we mean it for those.
>
>   Thanks,
>
>  Matt
>
>
>> On Wed, Jun 1, 2022 at 20:06, Barry Smith wrote:
>>
>>>
>>>   This appears to be a bug in the DMStag/Mat preallocator code. If you
>>> add after the DMCreateMatrix() line in your code
>>>
>>> PetscCall(MatSetOption(A, MAT_NO_OFF_PROC_ENTRIES, PETSC_FALSE));
>>>
>>> Your code will run correctly.
>>>
>>>   Patrick and Matt,
>>>
>>>   MatPreallocatorPreallocate_Preallocator() has
>>>
>>> PetscCall(MatSetOption(A, MAT_NO_OFF_PROC_ENTRIES, p->nooffproc));
>>>
>>> to make the assembly of the stag matrix from the preallocator matrix a
>>> little faster,
>>>
>>> but then it never "undoes" this call. Hence the matrix is left in the
>>> state where it will error if someone sets values from a different rank
>>> (which they certainly can, using DMStagMatSetValuesStencil()).
>>>
>>>  I think you need to clear the NO_OFF_PROC option at the end
>>> of MatPreallocatorPreallocate_Preallocator(): the fact that the
>>> preallocation process never needed communication does not mean that when
>>> someone puts real values in the matrix they will never use communication;
>>> they can put in values any dang way they please.
>>>
>>> I don't know why this bug has not come up before.
>>>
>>>   Barry
>>>
>>>
>>> On May 31, 2022, at 11:08 PM, Ye Changqing 
>>> wrote:
>>>
>>> Dear all,
>>>
>>> [BugReport.c] is a sample code, [BugReportParallel.output] is the output
>>> when executing BugReport with mpiexec, [BugReportSerial.output] is the output
>>> in serial execution.
>>>
>>> Best,
>>> Changqing
>>>
>>> --
>>> *From:* Dave May 
>>> *Sent:* May 31, 2022, 22:55
>>> *To:* Ye Changqing 
>>> *Cc:* petsc-us...@mcs.anl.gov 
>>> *Subject:* Re: [petsc-users] Mat created by DMStag cannot access ghost points
>>>
>>>
>>>
>>> On Tue 31. May 2022 at 16:28, Ye Changqing 
>>> wrote:
>>>
>>> Dear developers of PETSc,
>>>
>>> I encountered a problem when using the DMStag module. The program runs
>>> perfectly in serial, while errors are thrown in parallel (using mpiexec).
>>> Some rows in the Mat cannot be accessed by local processes when looping
>>> over all elements in the DMStag. The DM object I used has only one DOF
>>> per element, so I could switch to the DMDA module easily, and the
>>> program now works.
>>>
>>> Some snippets are below.
>>>
>>> Initialise a DMStag object:
>>> PetscCall(DMStagCreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE,
>>> DM_BOUNDARY_NONE, M, N, PETSC_DECIDE, PETSC_DECIDE, 0, 0, 1,
>>> DMSTAG_STENCIL_BOX, 1, NULL, NULL, &(s_ctx->dm_P)));
>>> Created a Mat:
>>> PetscCall(DMCreateMatrix(s_ctx->dm_P, A));
>>> Loop:
>>> PetscCall(DMStagGetCorners(s_ctx->dm_V, &startx, &starty, &startz, &nx,
>>> &ny, &nz, &extrax, &extray, &extraz));
>>> for (ey = starty; ey < starty + ny; ++ey)
>>> for (ex = startx; ex < startx + nx; ++ex)
>>> {
>>> ...
>>> PetscCall(DMStagMatSetValuesStencil(s_ctx->dm_P, *A, 2, &row[0], 2,
>>> &col[0], &val_A[0][0], ADD_VALUES));  // The traceback shows the problem is
>>> in here.
>>> }
>>>
>>>
>>> In addition to the code or MWE, please forward us the complete stack
>>> trace / error thrown to stdout.
>>>
>>> Thanks,
>>> Dave
>>>
>>>
>>>
>>> Best,
>>> Changqing
>>>
>>> 
>>>
>>>
>>>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
> <http://www.cse.buffalo.edu/~knepley/>
>
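
[Editor's note] The option lifecycle discussed in this thread can be modeled abstractly. The sketch below is illustrative Python pseudocode, not the PETSc API; all names (Mat, preallocate, no_off_proc) are invented for the illustration. It shows why leaving MAT_NO_OFF_PROC_ENTRIES set after preallocation makes later off-process insertions error, and why resetting it at the end fixes the problem:

```python
class Mat:
    """Toy matrix that mimics MAT_NO_OFF_PROC_ENTRIES behavior (invented names)."""
    def __init__(self, rank):
        self.rank = rank
        self.no_off_proc = False  # analogous to MAT_NO_OFF_PROC_ENTRIES
        self.entries = {}

    def set_value(self, owner_rank, key, value):
        # With the option set, a contribution originating on another rank is
        # an error, mirroring the crash seen with DMStagMatSetValuesStencil().
        if self.no_off_proc and owner_rank != self.rank:
            raise RuntimeError("off-process entry with MAT_NO_OFF_PROC_ENTRIES set")
        self.entries[key] = value

def preallocate(mat, clear_option_at_end):
    # The preallocator sets the option to speed up its internal assembly ...
    mat.no_off_proc = True
    # ... (preallocation work would happen here) ...
    # The bug: without this reset, the matrix is handed back to the user
    # still refusing off-process entries.
    if clear_option_at_end:
        mat.no_off_proc = False

buggy = Mat(rank=0)
preallocate(buggy, clear_option_at_end=False)
try:
    buggy.set_value(owner_rank=1, key=(0, 0), value=1.0)
    failed = False
except RuntimeError:
    failed = True

fixed = Mat(rank=0)
preallocate(fixed, clear_option_at_end=True)
fixed.set_value(owner_rank=1, key=(0, 0), value=1.0)  # now fine
print(failed, fixed.entries[(0, 0)])
```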


[petsc-dev] Manual page improvements! (Docs MRs to main until PETSc 3.18 is released)

2022-05-04 Thread Patrick Sanan
We just merged a large change to the documentation into the main branch.
This integrates the manual pages into the documentation build with Sphinx,
so you can do things like

- Find manual page content with the website search box
- Add images, inline code, equations, citations, links to the users manual,
etc. on the man pages

Check it out at petsc.org/main/docs/manualpages [6]. For an example of a
manual page updated to take advantage of the new features, search for
"PCFIELDSPLIT".

This is accomplished by allowing inline MyST (Markdown) syntax [1] in the
usual man page blocks in the source code. More details at the MR [2].

Unlike most previous docs changes, this has only been done on the main
branch. So, **until PETSc 3.18 is released, make documentation MRs to main**,
unless fixing something particularly critical on the release branch, or
making a change which you've tested won't cause (serious) merge conflicts
when release is merged into main.

I'm hoping that developers will try using the man pages at petsc.org/main
and thus be able to give feedback. There are a number of known issues, and
hopefully the most egregious of these can be resolved before the release.
You can find links to these issues in the MR description [2] or on the list
of "docs"-labelled issues on GitLab [3]. If so inclined,

- use the thumbs-up and thumbs-down buttons on the issues to help
prioritize
- comment about formatting on issue #1156 [4]
- suggest candidates for specific man pages to make prettier at #1155 [5]
- add comments on other topics to existing issues or open new ones (with
the "docs" label)
- assign yourself to any issues you'd like to be involved in resolving

[1]: https://myst-parser.readthedocs.io/en/latest/index.html
[2]: https://gitlab.com/petsc/petsc/-/merge_requests/4989
[3]:
https://gitlab.com/petsc/petsc/-/issues/?sort=updated_desc&state=opened&label_name%5B%5D=docs
[4]: https://gitlab.com/petsc/petsc/-/issues/1156
[5]: https://gitlab.com/petsc/petsc/-/issues/1155
[6]: https://petsc.org/main/docs/manualpages/PC/PCFIELDSPLIT
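
[Editor's note] For illustration, here is a hypothetical manual page block, not taken from the PETSc source (the function and its text are invented), showing the kind of inline MyST syntax that is now possible in the usual comment blocks:

```c
/*@
   FooBarSolve - Solves a hypothetical foo-bar system.

   Notes:
   Man page blocks are now processed as MyST (Markdown), so inline math
   such as $\|r\|_2 < \epsilon$, images, citations, and links to the
   users manual can appear directly in the source comment.

.seealso: `PCFIELDSPLIT`
@*/
```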


Re: [petsc-dev] Tutorials Question

2022-02-19 Thread Patrick Sanan
We could include the ones that are somewhat complete at
petsc.org/release/tutorials, but in addition to general cleanup to ensure
they're clear enough to help more than hinder, I think it's essential to
replace the hard-coded code examples with excerpts from the tutorial
programs themselves, and to replace hard-coded output with test reference
output (and ideally do the same for the input so that it also can't get out of
sync, but I don't know if we have a clean way to do that from what's in the
/* TEST */ blocks).




On Sat, Feb 19, 2022 at 02:14, Jacob Faibussowitsch <jacob@gmail.com> wrote:

> They would need to be cleaned up (many of them tell people to “cd
> $PETSC_DIR/$PETSC_ARCH/src/ksp/ksp/examples/tutorials”), but yeah they
> are a pretty useful on-ramp.
>
> It would also be cool if they followed some structured path, where the
> Vec->Mat->PC/KSP->SNES/DM/TS progression illustrated some evolution of
> abstraction for a single particular problem. Start from finite difference
> and build up to FEM maybe? Not sure.
>
> On Feb 18, 2022, at 18:58, Matthew Knepley  wrote:
>
> Should we put those back up?
>
>   Thanks,
>
> Matt
>
> On Fri, Feb 18, 2022 at 7:34 PM Jacob Faibussowitsch 
> wrote:
>
>> That was an early version of the new docs. Patrick, Hannah, Hong and I
>> wrote those tutorials as we were testing out the new format. The QuickStart
>> tutorial made it into the final set pretty much unchanged, but not sure if
>> the rest of the sections did.
>>
>> On Feb 18, 2022, at 15:23, Matthew Knepley  wrote:
>>
>> What about these?
>>
>>
>> https://wg-beginners.readthedocs.io/en/latest/tutorials/introductory_tutorial.html
>>
>> They are not on the new site, and people here liked them.
>>
>>Thanks,
>>
>>   Matt
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>> https://www.cse.buffalo.edu/~knepley/
>> 
>>
>>
>>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
> 
>
>
>


Re: [petsc-dev] [petsc-users] MatPreallocatorPreallocate segfault with PETSC 3.16

2022-02-07 Thread Patrick Sanan
This makes that change to improve the behavior of the release branch:
https://gitlab.com/petsc/petsc/-/merge_requests/4818

For reference in the future, also see this related issue:
https://gitlab.com/petsc/petsc/-/issues/852

On Fri, Feb 4, 2022 at 12:03, Patrick Sanan <patrick.sa...@gmail.com> wrote:

> So, seems like a fix we should make at this point is to make
> MatPreallocator explicitly single use, throwing an error if
> MatPreallocatorPreallocate is called a second time. (Still open for further
> debate as to what to do in general but this quick patch to release to
> replace the crash with an error message seems justified in any case).
>
> On Fri, Feb 4, 2022 at 02:02, Jed Brown wrote:
>
>> MatPreallocator stores "the nonzero structure" in a hash table so it can
>> be easily updated. A normal Mat stores it in a compressed (CSR) format that
>> is expensive to update.
>>
>> Marius Buerkle  writes:
>>
>> > Ok. I did not know that. I was under the impression that
>> MatPreallocator does not actually allocate the nonzeros and just stores the
>> nonzero structure. But if this is not the case then of course I just
>> duplicate the matrix.
>> >
>> > Thanks for the feedback.
>> >
>> >> Sent: Thursday, 03.02.2022 at 03:09
>> >> From: "Jed Brown" 
>> >> To: "Marius Buerkle" , "Patrick Sanan" <
>> patrick.sa...@gmail.com>
>> >> Cc: "PETSc users list" , petsc-dev <
>> petsc-dev@mcs.anl.gov>
>> >> Subject: Re: Re: [petsc-dev] [petsc-users]
>> MatPreallocatorPreallocate segfault with PETSC 3.16
>> >>
>> >> Marius Buerkle  writes:
>> >>
>> >> > Thanks for the reply. Yes, the example works; this is how I was
>> doing it before. But the matrix is rather big and I need a matrix with the
>> same structure at various points in my code. So it was convenient to create
>> the matrix with the preallocator, destroy it after using it to free the
>> memory, and create it again later with the same preallocator.
>> >> > Anyway it works with MatDuplicate for now.
>> >>
>> >> I think it should take *less* memory to destroy the preallocator and
>> duplicate the actual matrix than to destroy the matrix and persist the
>> preallocator. If that is not the case (or close enough), we can make it so.
>>
>
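
[Editor's note] The quick patch described above (replace the second-use crash with an explanatory error) can be sketched abstractly. This is illustrative Python pseudocode, not the PETSc implementation; all names are invented:

```python
class MatPreallocatorModel:
    """Toy model of the single-use behavior adopted for MatPreallocator.

    Illustrative only: the structure and names are invented, not PETSc API.
    """
    def __init__(self):
        self.hash = {}          # (i, j) -> True: the nonzero structure
        self.consumed = False

    def set_value(self, i, j):
        self.hash[(i, j)] = True

    def preallocate(self):
        # Second call: instead of dereferencing the destroyed hash (the
        # 3.16 segfault), fail with an error that explains the restriction.
        if self.consumed:
            raise RuntimeError(
                "preallocate() may only be called once; "
                "duplicate the preallocated matrix instead")
        structure = sorted(self.hash)
        self.hash.clear()       # destroy the hash to lower peak memory
        self.consumed = True
        return structure

p = MatPreallocatorModel()
p.set_value(0, 0); p.set_value(1, 2)
print(p.preallocate())          # first call succeeds
try:
    p.preallocate()
except RuntimeError:
    print("error on second call")
```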


Re: [petsc-dev] [petsc-users] MatPreallocatorPreallocate segfault with PETSC 3.16

2022-02-04 Thread Patrick Sanan
So, seems like a fix we should make at this point is to make
MatPreallocator explicitly single use, throwing an error if
MatPreallocatorPreallocate is called a second time. (Still open for further
debate as to what to do in general but this quick patch to release to
replace the crash with an error message seems justified in any case).

On Fri, Feb 4, 2022 at 02:02, Jed Brown wrote:

> MatPreallocator stores "the nonzero structure" in a hash table so it can
> be easily updated. A normal Mat stores it in a compressed (CSR) format that
> is expensive to update.
>
> Marius Buerkle  writes:
>
> > Ok. I did not know that. I was under the impression that MatPreallocator
> does not actually allocate the nonzeros and just stores the nonzero
> structure. But if this is not the case then of course I just duplicate the
> matrix.
> >
> > Thanks for the feedback.
> >
> >> Sent: Thursday, 03.02.2022 at 03:09
> >> From: "Jed Brown" 
> >> To: "Marius Buerkle" , "Patrick Sanan" <
> patrick.sa...@gmail.com>
> >> Cc: "PETSc users list" , petsc-dev <
> petsc-dev@mcs.anl.gov>
> >> Subject: Re: Re: [petsc-dev] [petsc-users]
> MatPreallocatorPreallocate segfault with PETSC 3.16
> >>
> >> Marius Buerkle  writes:
> >>
> >> > Thanks for the reply. Yes, the example works; this is how I was doing
> it before. But the matrix is rather big and I need a matrix with the same
> structure at various points in my code. So it was convenient to create the
> matrix with the preallocator, destroy it after using it to free the memory,
> and create it again later with the same preallocator.
> >> > Anyway it works with MatDuplicate for now.
> >>
> >> I think it should take *less* memory to destroy the preallocator and
> duplicate the actual matrix than to destroy the matrix and persist the
> preallocator. If that is not the case (or close enough), we can make it so.
>
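
[Editor's note] Jed's point about hash-based versus CSR storage can be illustrated with a small sketch (Python used as pseudocode; this is not how PETSc stores matrices internally, just the idea): inserting into a hash set is constant-time and order-independent, while the compressed format is built once from the hash:

```python
def insert_hash(nz, i, j):
    # Constant-time insertion, in any order; duplicates are absorbed.
    nz.add((i, j))

def hash_to_csr(nz, nrows):
    # One-time compression of the hash into CSR (row pointers + columns).
    # Updating CSR afterwards would require shifting these arrays, which
    # is why a plain CSR matrix is expensive to update.
    cols_by_row = [[] for _ in range(nrows)]
    for i, j in nz:
        cols_by_row[i].append(j)
    row_ptr, col_idx = [0], []
    for row in cols_by_row:
        col_idx.extend(sorted(row))
        row_ptr.append(len(col_idx))
    return row_ptr, col_idx

nz = set()
for i, j in [(1, 1), (0, 0), (0, 2), (1, 1)]:   # unordered, with a repeat
    insert_hash(nz, i, j)
row_ptr, col_idx = hash_to_csr(nz, nrows=2)
print(row_ptr, col_idx)
```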


Re: [petsc-dev] [petsc-users] MatPreallocatorPreallocate segfault with PETSC 3.16

2022-02-02 Thread Patrick Sanan
There is also the hedge of adding a parameter and API function to control
which of these two behaviors is used, and if trying to preallocate twice,
throwing an error that instructs the user how to change the behavior,
noting that it will increase peak memory usage.

On Tue, Feb 1, 2022 at 17:07, Jed Brown wrote:

> Stefano Zampini  writes:
>
> > On Tue, Feb 1, 2022 at 18:34, Jed Brown wrote:
> >
> >> Patrick Sanan  writes:
> >>
> >> > On Tue, Feb 1, 2022 at 16:20, Jed Brown wrote:
> >> >
> >> >> Patrick Sanan  writes:
> >> >>
> >> >> > Sorry about the delay on this. I can reproduce.
> >> >> >
> >> >> > This regression appears to be a result of this optimization:
> >> >> > https://gitlab.com/petsc/petsc/-/merge_requests/4273
> >> >>
> >> >> Thanks for tracking this down. Is there a reason to prefer
> preallocating
> >> >> twice
> >> >>
> >> >>ierr =
> >> >> MatPreallocatorPreallocate(preallocator,PETSC_TRUE,A);CHKERRQ(ierr);
> >> >>ierr =
> >> >>
> >>
> MatPreallocatorPreallocate(preallocator,PETSC_TRUE,A_duplicate);CHKERRQ(ierr);
> >> >>
> >> >> versus using MatDuplicate() or MatConvert()?
> >> >>
> >>
> >
> > Jed
> >
> > this is not the point. Suppose you pass around only a preallocator, but
> do
> > not pass around the matrices. Reusing the preallocator should be allowed.
>
> The current code is not okay (crashing is not okay), but we should decide
> whether to consume the preallocator or to retain the data structure. Peak
> memory use is the main reason hash-based allocation hasn't been default and
> wasn't adopted sooner. Retaining the hash until the preallocator is
> destroyed increases that peak.
>
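
[Editor's note] The hedge proposed here, a flag controlling consume-versus-retain with an instructive error, might look schematically like this. Python is used as pseudocode; all names are invented, and this is a sketch of the proposal, not an actual PETSc interface:

```python
class PreallocatorModel:
    """Toy sketch of opt-in reuse for a preallocator (invented names)."""
    def __init__(self, retain_structure=False):
        # Retaining the hash permits repeated preallocation at the cost of
        # higher peak memory, which is why it would be off by default.
        self.retain_structure = retain_structure
        self.hash = {(0, 0): True, (1, 1): True}   # stand-in nonzero structure
        self.used = False

    def preallocate(self):
        if self.used and not self.retain_structure:
            # The error tells the user how to change the behavior, and
            # warns that doing so increases peak memory usage.
            raise RuntimeError(
                "structure already consumed; construct the preallocator "
                "with retain_structure=True to reuse it (uses more memory)")
        structure = sorted(self.hash)
        if not self.retain_structure:
            self.hash = {}      # consume: free the hash now
        self.used = True
        return structure

once = PreallocatorModel()
once.preallocate()
try:
    once.preallocate()
    reused = True
except RuntimeError:
    reused = False

twice = PreallocatorModel(retain_structure=True)
print(reused, twice.preallocate() == twice.preallocate())
```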


Re: [petsc-dev] [petsc-users] MatPreallocatorPreallocate segfault with PETSC 3.16

2022-02-01 Thread Patrick Sanan
That works, as in the attached example - Marius, would that work for your
case?

On Tue, Feb 1, 2022 at 16:33, Jed Brown wrote:

> Patrick Sanan  writes:
>
> > On Tue, Feb 1, 2022 at 16:20, Jed Brown wrote:
> >
> >> Patrick Sanan  writes:
> >>
> >> > Sorry about the delay on this. I can reproduce.
> >> >
> >> > This regression appears to be a result of this optimization:
> >> > https://gitlab.com/petsc/petsc/-/merge_requests/4273
> >>
> >> Thanks for tracking this down. Is there a reason to prefer preallocating
> >> twice
> >>
> >>ierr =
> >> MatPreallocatorPreallocate(preallocator,PETSC_TRUE,A);CHKERRQ(ierr);
> >>ierr =
> >>
> MatPreallocatorPreallocate(preallocator,PETSC_TRUE,A_duplicate);CHKERRQ(ierr);
> >>
> >> versus using MatDuplicate() or MatConvert()?
> >>
> >
> > Maybe if your preallocation is an overestimate for each of two different
> > post-assembly non-zero structures in A and A_duplicate?
>
> Even then, why not preallocate A and duplicate immediately, before
> compressing out zeros?
>


ex251.c
Description: Binary data


Re: [petsc-dev] [petsc-users] MatPreallocatorPreallocate segfault with PETSC 3.16

2022-02-01 Thread Patrick Sanan
On Tue, Feb 1, 2022 at 16:20, Jed Brown wrote:

> Patrick Sanan  writes:
>
> > Sorry about the delay on this. I can reproduce.
> >
> > This regression appears to be a result of this optimization:
> > https://gitlab.com/petsc/petsc/-/merge_requests/4273
>
> Thanks for tracking this down. Is there a reason to prefer preallocating
> twice
>
>ierr =
> MatPreallocatorPreallocate(preallocator,PETSC_TRUE,A);CHKERRQ(ierr);
>ierr =
> MatPreallocatorPreallocate(preallocator,PETSC_TRUE,A_duplicate);CHKERRQ(ierr);
>
> versus using MatDuplicate() or MatConvert()?
>

Maybe if your preallocation is an overestimate for each of two different
post-assembly non-zero structures in A and A_duplicate?


Re: [petsc-dev] [petsc-users] MatPreallocatorPreallocate segfault with PETSC 3.16

2022-02-01 Thread Patrick Sanan
Sorry about the delay on this. I can reproduce.

This regression appears to be a result of this optimization:
https://gitlab.com/petsc/petsc/-/merge_requests/4273

The changes there include having MatPreallocator destroy its internal
hash structure within MatPreallocatorPreallocate(), which allows for a
lower overall memory footprint but prevents usage of the same
MatPreallocator object for two Mats. The error you see occurs because this
hash structure was destroyed during the first preallocation. We didn't catch
this because our test suite doesn't test that usage.

cc'ing PETSc dev because I'm not sure how to best resolve this - enforce
that a MatPreallocator is only "good once", remove the PetscHSetIJDestroy()
calls and accept the bigger memory footprint, or something else more clever?

My test to reproduce with C, which can be included in our fix in
src/mat/tests , attached.

On Mon, Jan 24, 2022 at 10:33, Marius Buerkle wrote:

>
> Hi,
>
> I try to use MatPreallocatorPreallocate to allocate a MATMPIAIJ matrix A.
> I define the MATPREALLOCATOR preM with MatSetValues and then call
> MatPreallocatorPreallocate to get A. This works on the first call to
> MatPreallocatorPreallocate, but if I call MatPreallocatorPreallocate again
> with the same preM to get another matrix B, then I get a segfault, although
> the program continues to run (see below). It worked with PETSc 3.15, but
> with 3.16 it stopped working.
> When I check mat_info_nz_allocated and mat_info_nz_used for the allocated
> matrix it looks correct for the first call, but on the second call
> mat_info_nz_used is 0. I also attached a minimal example.
>
>
> [0]PETSC ERROR: - Error Message
> --
> [0]PETSC ERROR: Null argument, when expecting valid pointer
> [1]PETSC ERROR: - Error Message
> --
> [0]PETSC ERROR: Null Pointer: Parameter # 1
> [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.
> [0]PETSC ERROR: [1]PETSC ERROR: Null argument, when expecting valid pointer
> [1]PETSC ERROR: Petsc Development GIT revision: v3.16.3-686-g5e81a90  GIT
> Date: 2022-01-23 05:13:26 +
> [0]PETSC ERROR: ./prem_test on a  named cd001 by cdfmat_marius Mon Jan 24
> 18:21:17 2022
> [0]PETSC ERROR: Null Pointer: Parameter # 1
> [1]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.
> [1]PETSC ERROR: Configure options
> --prefix=/home/cdfmat_marius/prog/petsc/petsc_main_dbg
> --with-scalar-type=complex --with-fortran-kernels=1 --with-64-bit-indices=0
> --CC=mpiicc --COPTFLAGS="-g -traceback" --CXX=mpiicpc --CXXOPTFLAGS="-g
> -traceback" --FC=mpiifort --FOPTFLAGS="-g -traceback" --with-mpi=1
> --with-x=0 --with-cuda=0
> --download-parmetis=/home/cdfmat_marius/prog/petsc/git/petsc_main/externalpackages/git.parmetis.tar.gz
> --download-parmetis-commit=HEAD
> --download-metis=/home/cdfmat_marius/prog/petsc/git/petsc_main/externalpackages/git.metis.tar.gz
> --download-metis-commit=HEAD
> --download-slepc=/home/cdfmat_marius/prog/petsc/git/petsc_main/externalpackages/git.slepc_main.tar.gz
> --download-slepc-commit=HEAD
> --download-superlu_dist=/home/cdfmat_marius/prog/petsc/git/petsc_main/externalpackages/git.superlu_dist.tar.gz
> --download-superlu_dist-commit=HEAD
> --download-mumps=/home/cdfmat_marius/prog/petsc/git/petsc_main/externalpackages/git.mumps.tar.gz
> --download-mumps-commit=HEAD
> --download-hypre=/home/cdfmat_marius/prog/petsc/git/petsc_main/externalpackages/git.hypre.tar.gz
> --download-hypre-commit=HEAD
> --download-hwloc=/home/cdfmat_marius/prog/petsc/git/petsc_main/externalpackages/hwloc-2.5.0.tar.gz
> --download-sowing=/home/cdfmat_marius/prog/petsc/git/petsc_main/externalpackages/git.sowing.tar.gz
> --download-elemental=/home/cdfmat_marius/prog/petsc/git/petsc_main/externalpackages/git.elemental.tar.gz
> --download-elemental-commit=HEAD
> --download-make=/home/cdfmat_marius/prog/petsc/git/petsc_main/externalpackages/make-4.2.1-6.fc28.tar.gz
> --download-ptscotch=/home/cdfmat_marius/prog/petsc/git/petsc_main/externalpackages/git.ptscotch.tar.gz
> --download-ptscotch-commit=HEAD --with-openmp=0 --with-pthread=0
> --with-cxx-dialect=c++11 --with-debugging=1 --with-cuda=0 --with-cudac=0
> --with-valgrind=0 --with-blaslapack-lib="-mkl=sequential
> -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64 -lpthread -lm -ldl"
> --with-scalapack-lib="-mkl=sequential -lmkl_scalapack_lp64
> -lmkl_blacs_intelmpi_lp64 -lpthread -lm -ldl"
> --with-mkl_pardiso-dir=/home/appli/intel/compilers_and_libraries_2020.4.304/linux/mkl
> --with-mkl_cpardiso-dir=/home/appli/intel/compilers_and_libraries_2020.4.304/linux/mkl
> --with-mkl_sparse-dir=/home/appli/intel/compilers_and_libraries_2020.4.304/linux/mkl
> --with-mkl_sparse_optimize-dir=/home/appli/intel/compilers_and_libraries_2020.4.304/linux/mkl
> [0]PETSC ERROR: Petsc Development GIT revision: v3.16.3-686-g5e81a90  GIT
> 

Re: [petsc-dev] Gitlab workflow discussion with GitLab developers

2022-01-21 Thread Patrick Sanan
Very much agreed that the biggest sort of friction is dealing with MRs from
forks. I suspect that the reason many of the things we want don't work is
because they would be too dangerous to allow a random, possibly malicious,
user to do. E.g. setting labels seems innocuous enough, but all kinds of
workflows, including automated ones, could be based on them. A more likely
problem in our case is that someone could open an MR with
"workflow::Ready-to-Merge" because they guess that it means that from their
perspective it's ready (when to us it means more than that). It would be
easy for that to get merged before being reviewed.

So in asking about all this, maybe we should make sure that we understand
the privilege levels GitLab offers. Perhaps we can address the usual case,
where the outside person making an MR is a researcher or engineer whom one
of us knows (of) and so trusts to some degree, so there would be no
huge risk in giving them the ability to change labels etc.

(And my pet peeve is that my "todo list" is still swamped by "X set you as
an approver for Y".)

On Fri, Jan 21, 2022 at 06:53, Barry Smith wrote:

>
>
> On Jan 20, 2022, at 10:40 PM, Junchao Zhang 
> wrote:
>
> *  Email notification when one is mentioned or added as a reviewer
>
>
>Hmm, I get emails on these? I don't get email saying I am code owner
> for a MR
>
> *  Color text in comment box
> *  Click a failed job, run the job with the *updated* branch
> *  Allow one to reorder commits (e.g., the fix up commits generated from
> applying comments) and mark commits that should be fixed up
> *  Easily retarget a branch, e.g., from main to release (currently I have
> to checkout to local machine, do rebase, then push)
>
> --Junchao Zhang
>
>
> On Thu, Jan 20, 2022 at 7:05 PM Barry Smith  wrote:
>
>>
>>   I got asked to go over some of my Gitlab workflow uses next week with
>> some Gitlab developers; they do this to understand how Gitlab is used, how
>> it can be improved etc.
>>
>>   If anyone has ideas on topics I should hit, let me know. I will hit
>> them on the brokenness of appropriate code-owners not being automatically
>> added to reviewers. And support for people outside of the Petsc group to
>> set more things when they make MRs. And being to easily add non-PETSc folks
>> as reviewers.
>>
>>   Barry
>>
>>
>


Re: [petsc-dev] Speeding up building docs

2022-01-14 Thread Patrick Sanan
https://gitlab.com/petsc/petsc/-/issues/1084

On Sat, Jan 15, 2022 at 07:07, Patrick Sanan <patrick.sa...@gmail.com> wrote:

> Yes, that's a good point - there's no reason to do the copy if we're
> already assuming the source is up to date, so we can easily just check if
> the destination already has data!
>
> Also, while the build time certainly annoys me as well, it stopped being
> as much of a priority to improve because it now happens after the main
> Sphinx logic - if the thing you're iterating on is on the .rst pages, you
> can examine it before the copies and link fixes are done (though obviously
> the classic pages may not exist or be current, and the links may still be
> broken, with placeholders in them).
>
> On Fri, Jan 14, 2022 at 22:48, Barry Smith wrote:
>
>>
>>   Patrick,
>>
>> Building docs is so much faster now, thanks! But when I get
>>
>> Assuming that the classic docs in
>> /Users/barrysmith/Src/petsc/doc/_build_classic are current
>>
>> it still does
>>
>> 
>> Copying classic docs from conf.py
>> 
>> 
>> Copying directory
>> /Users/barrysmith/Src/petsc/doc/_build_classic/docs/manualpages to
>> /Users/barrysmith/Src/petsc/doc/_build/html/docs/manualpages
>> 
>> 
>> Copying directory /Users/barrysmith/Src/petsc/doc/_build_classic/include
>> to /Users/barrysmith/Src/petsc/doc/_build/html/include
>> 
>> 
>> Copying directory /Users/barrysmith/Src/petsc/doc/_build_classic/src to
>> /Users/barrysmith/Src/petsc/doc/_build/html/src
>> 
>> 
>> Fixing relative links from conf.py
>> 
>> 
>> Adding version to classic man pages, from conf.py
>> 
>>
>> Would it be possible to rig the dependencies so that the copies and fixes
>> are only done when needed, instead of every time?
>>
>> Thanks
>>
>> Barry
>>
>>
>>


Re: [petsc-dev] Speeding up building docs

2022-01-14 Thread Patrick Sanan
Yes, that's a good point - there's no reason to do the copy if we're
already assuming the source is up to date, so we can easily just check if
the destination already has data!

Also, while the build time certainly annoys me as well, it stopped being as
much of a priority to improve because it now happens after the main Sphinx
logic - if the thing you're iterating on is on the .rst pages, you can
examine it before the copies and link fixes are done (though obviously the
classic pages may not exist or be current, and the links may still be
broken, with placeholders in them).

On Fri, Jan 14, 2022 at 22:48, Barry Smith wrote:

>
>   Patrick,
>
> Building docs is so much faster now, thanks! But when I get
>
> Assuming that the classic docs in
> /Users/barrysmith/Src/petsc/doc/_build_classic are current
>
> it still does
>
> 
> Copying classic docs from conf.py
> 
> 
> Copying directory
> /Users/barrysmith/Src/petsc/doc/_build_classic/docs/manualpages to
> /Users/barrysmith/Src/petsc/doc/_build/html/docs/manualpages
> 
> 
> Copying directory /Users/barrysmith/Src/petsc/doc/_build_classic/include
> to /Users/barrysmith/Src/petsc/doc/_build/html/include
> 
> 
> Copying directory /Users/barrysmith/Src/petsc/doc/_build_classic/src to
> /Users/barrysmith/Src/petsc/doc/_build/html/src
> 
> 
> Fixing relative links from conf.py
> 
> 
> Adding version to classic man pages, from conf.py
> 
>
> Would it be possible to rig the dependencies so that the copies and fixes
> are only done when needed, instead of every time?
>
> Thanks
>
> Barry
>
>
>
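
[Editor's note] The dependency check Barry asks for could be as simple as skipping the copy when the destination already has data, as Patrick suggests above. A sketch follows (Python, with invented names and paths; this is not the actual PETSc doc build code):

```python
import os
import shutil
import tempfile

def copy_classic_docs(src, dest):
    """Copy the classic docs build only when the destination lacks data.

    Illustrative sketch of the check discussed above; a real build would
    also want to compare timestamps to detect a stale destination.
    """
    if os.path.isdir(dest) and os.listdir(dest):
        return False            # assume an earlier copy is still current
    shutil.copytree(src, dest, dirs_exist_ok=True)
    return True

with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "_build_classic")
    dest = os.path.join(tmp, "_build", "html")
    os.makedirs(src)
    open(os.path.join(src, "index.html"), "w").close()
    first = copy_classic_docs(src, dest)    # destination empty: copies
    second = copy_classic_docs(src, dest)   # data present: skipped
    print(first, second)
```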


[petsc-dev] Help tracking down unexpected Fortran behavior

2021-12-06 Thread Patrick Sanan
I ran into an unexpected seg fault, which took me too long to realize was
because of the old-school "you forgot the ierr" mistake! I was expecting
the compiler to complain, since we've had better checking for a while. E.g.
as in the attached code to reproduce, my compiler indeed errors on this

call PCSetType(pc, PCLU)

but not this

   call PCFactorSetMatOrderingType(pc, MATORDERINGEXTERNAL)

I'm not yet seeing what the difference is, but there is still plenty I
don't understand about how the custom Fortran interfaces work. E.g. both of
those functions have custom interfaces in ftn-custom directories, accepting
an additional "len" argument to be used with FIXCHAR(), but I'm not sure
how that argument is ultimately populated.


ex999f.F90
Description: Binary data


Re: [petsc-dev] [DocTip!] #2: Aiming for self-updating docs

2021-11-06 Thread Patrick Sanan
On Fri, Nov 5, 2021 at 19:18, Lawrence Mitchell wrote:

>
> > On 5 Nov 2021, at 15:25, Patrick Sanan  wrote:
> >
> > Good question. We don't have any that start at specific line numbers,
> currently, as it is indeed too brittle - I suspect I removed some that were
> there at one point.
> >
> > The inclusions essentially all use :start-at: to specify a line to
> match. Ideally you'd also use :end-at: to specify where to stop, but some
> short snippets do instead specify a number of lines. Note that a
> practically handy option is :append:, as in this example where you are
> trying to excerpt the function PetscError(), and you can match on the
> PetscFunctionReturn(0) and then re-add the closing brace.
> >
> > .. literalinclude::/../src/sys/error/err.c
> > :start-at: PetscErrorCode PetscError(
> > :end-at: PetscFunctionReturn(0)
> > :append: }
>
> FWIW, this feels a bit like trying to reinvent WEB.
>
I don't know what WEB is, but if you're saying that this is kinda clunky,
yes it definitely is - my only contention is that it's better than
copy-pasting code and output.  I'm not sure if there's an easier and/or
better way with Sphinx.

>
> Doing actual literate documentation of key tutorial programs would be a
> nice way of doing this, but I realise that's a lot more effort.
>
This is still a hope/plan to go into doc/tutorials - follow the deal.ii
model for a small number of key examples. Matt has done a couple of pages
there already, in this direction.

Lawrence


[petsc-dev] [DocTip!] #3 CI docs build and preview

2021-11-06 Thread Patrick Sanan
All MRs now build all the documentation. This is done with the
"docs-review" job (defined in .gitlab-ci.yml ) . There is special logic
associated with the "docs-only" label: if you add this to your MR, the
usual heavy library tests will be skipped, and the docs will be built
without you having to manually "un-pause" the pipelines each time you
update the MR.

Jed showed us a nice approach where the results of this build are deployed
to GitLab pages, allowing us to use a "Review App" to see the documentation
build corresponding to the MR branch.

Practically, this allows you to simply click "view app" on the MR page and
you'll be taken to a complete version of the site (as on petsc.org).

You may still want to build the docs locally if you want to iterate (more)
quickly, as this allows you to quickly regenerate them after making changes
(as opposed to the many minutes docs-review takes), and keep open a web
browser pointing to your local build.

Unfortunately, I don't know a simple way to reliably preview a single .rst
file (as you might be used to from working with some Markdown-based tools).


Re: [petsc-dev] [DocTip!] #2: Aiming for self-updating docs

2021-11-05 Thread Patrick Sanan
Good question. We don't have any that start at specific line numbers,
currently, as it is indeed too brittle - I suspect I removed some that were
there at one point.

The inclusions essentially all use :start-at: to specify a line to match.
Ideally you'd also use :end-at: to specify where to stop, but some short
snippets do instead specify a number of lines. Note that a practically
handy option is :append:, as in this example where you are trying to
excerpt the function PetscError(), and you can match on the
PetscFunctionReturn(0) and then re-add the closing brace.

.. literalinclude:: /../src/sys/error/err.c
   :start-at: PetscErrorCode PetscError(
   :end-at: PetscFunctionReturn(0)
   :append: }
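To make the semantics of those options concrete, here's a toy Python model of what the directive effectively does (an illustration only, not Sphinx's actual implementation):

```python
def excerpt(text, start_at, end_at, append=None):
    """Toy model of literalinclude's :start-at:/:end-at:/:append: options:
    keep lines from the first line containing start_at through the first
    subsequent line containing end_at, then append extra text verbatim."""
    out, started = [], False
    for line in text.splitlines():
        if not started:
            if start_at in line:
                started = True
                out.append(line)
        else:
            out.append(line)
            if end_at in line:
                break
    if append is not None:
        out.append(append)
    return "\n".join(out)

src = """static int helper(void) { return 0; }

PetscErrorCode PetscError(...)
{
  do_stuff();
  PetscFunctionReturn(0);
}
"""
# Matching on the body's PetscFunctionReturn(0) stops before the closing
# brace, so :append: } restores it.
print(excerpt(src, "PetscErrorCode PetscError", "PetscFunctionReturn", append="}"))
```

This also shows why matching on text is less brittle than line numbers: the excerpt survives edits elsewhere in the file.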




On Fri., Nov. 5, 2021 at 15:00, Matthew Knepley wrote:

> Question about this. Do we have any literalincludes that are indexed by
> line number? I thought I had done this, and your discussion
> below makes me realize that this is fragile. There is still some fragility
> with function names, but that ostensibly can be caught by the
> mechanism itself.
>
>   Thanks,
>
>  Matt
>
> On Fri, Nov 5, 2021 at 4:01 AM Patrick Sanan 
> wrote:
>
>> A big problem with documentation is that it goes out of date. This
>> quickly turns something that was useful into something which is actually
>> harmful. Showing the user things that don't work exactly as described is
>> worse than showing them nothing.
>>
>> So, it's good to aspire to writing docs in as future-proof a way as
>> possible, especially given the broad scope and rapid rate of change in
>> PETSc.
>>
>> One step in this direction that we use currently is to avoid including
>> explicit code blocks and output [2], but rather excerpt from the src/ tree
>> for both code and the same output used for the test suite.
>>
>> An example of where this is done is in the classic "hands-on" tutorials
>> [1]: these used to be part of an HTML page with the output inline, but now
>> the source uses syntax like this:
>>
>> .. literalinclude:: /../src/ksp/ksp/tutorials/output/ex50_tut_3.out
>> :language: none
>>
>> This isn't a bulletproof approach yet - the output can still be out of
>> date, but good enough to pass the test suite (though that can at least be
>> bulk-updated with a single command), and the input command is still
>> literally included. Still, it reduces the maintenance burden of
>> documentation and reduces the chance of confusing the reader with stale
>> information.
>>
>> [1]: https://petsc.org/release/tutorials/handson/
>> [2]:
>> https://petsc.org/release/developers/documentation/#sphinx-documentation-guidelines
>>
>>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
> <http://www.cse.buffalo.edu/~knepley/>
>


[petsc-dev] [DocTip!] #2: Aiming for self-updating docs

2021-11-05 Thread Patrick Sanan
A big problem with documentation is that it goes out of date. This quickly
turns something that was useful into something which is actually harmful.
Showing the user things that don't work exactly as described is worse than
showing them nothing.

So, it's good to aspire to writing docs in as future-proof a way as
possible, especially given the broad scope and rapid rate of change in
PETSc.

One step in this direction that we use currently is to avoid including
explicit code blocks and output [2], but rather excerpt from the src/ tree
for both code and the same output used for the test suite.

An example of where this is done is in the classic "hands-on" tutorials [1]:
these used to be part of an HTML page with the output inline, but now the
source uses syntax like this:

.. literalinclude:: /../src/ksp/ksp/tutorials/output/ex50_tut_3.out
:language: none

This isn't a bulletproof approach yet - the output can still be out of
date, but good enough to pass the test suite (though that can at least be
bulk-updated with a single command), and the input command is still
literally included. Still, it reduces the maintenance burden of
documentation and reduces the chance of confusing the reader with stale
information.

[1]: https://petsc.org/release/tutorials/handson/
[2]:
https://petsc.org/release/developers/documentation/#sphinx-documentation-guidelines


[petsc-dev] [DocTip!] #1 : What does the docs build build? What can I delete?

2021-11-03 Thread Patrick Sanan
This is an experiment to try to disseminate information about the docs.
Things here will be less formal and more practical than what might be at
petsc.org/release/developers/documentation, and importantly, because this
is a mailing list, I can describe things that will go out of date as we
improve the docs but are nonetheless useful to current development.

On to tip #1 !

What happens when you build the docs locally by following our instructions?
What files are generated and can I delete them?

The basic instructions are here:
https://petsc.org/release/developers/documentation/#building-the-html-docs-locally
(Bonus tip: make sure you don't accidentally put petsc-docs-env inside of
doc/, as that will greatly confuse Sphinx as it locates its own .rst files!)

1. doc/makefile calls sphinx-build
2. sphinx-build looks at doc/conf.py and searches the whole doc/ tree for
.rst files
3. It then uses the html builder to generate _build/html.
4. You open _build/html/index.html

That's not the whole story, though, because we still rely on the "classic"
docs system which has a lot of useful logic we haven't made more
Sphinx/python native yet. This is controlled in two ways:

1. With a custom extension in doc/ext, relying on the classic "htmlmap" file
2. With some custom hooks defined at the end of conf.py

Because of these, when you run sphinx-build, PETSc itself will be
configured as minimally as possible to run "make alldoc12", and you'll end
up with an arch-classic-docs directory and a doc/_build_classic directory.
You'll see (lots of) additional output due to this configuration, and some
banners at the end of the build as the generated files are copied and
placeholder relative links required by the extension are resolved.

That's still not the whole story, though, because we have a special "add
files only" images repository. Thus, in doc/makefile, you'll see very
simple logic which does a Git clone to populate doc/images from
gitlab.com/petsc/images .

So, I end up with these untracked directories:

doc/_build
Sphinx's output directory. Can be generated quite quickly, so delete
whenever you'd like. "make clean" in doc/ deletes this.

doc/_build_classic
Output for the classic docs build. Delete when desired, but realize
that regenerating it can be slow, so only do so if e.g. you've changed the
man pages. "make clean" in doc/ also deletes this.

doc/images
Will not currently be updated in any intelligent way, so if you have
image issues, make sure you haven't got any of your own new stuff in here,
delete it, and let it be re-cloned.

arch-classic-docs
Delete as desired, but in general you shouldn't need to.


Re: [petsc-dev] google still not good at finding petsc.org docs

2021-10-12 Thread Patrick Sanan
For some reason for that term, Google returns a "petsc-3.5" URL instead of
a "petsc-current" URL (which redirects). Maybe this was a problem even
before we migrated to Sphinx? I don't have a great idea on how to fix,
other than to slap a "THIS IS NOT CURRENT" label and a link on the top of
all those pages (which is something nice to do, mimicking what
Scikit-learn's website does).

Re the search, it is indeed not rigged to search the man pages (or the HTML
source), because those aren't fully integrated into Sphinx - they are
generated by the "classic" process (involving configuring PETSc) and then
copied into place. Maybe the most efficient way to resolve this is to push
harder to make the man pages properly integrated into the Sphinx build.

In the interest of making public the state of the docs more, I'm going to
transfer the rest of my local notes to GitLab issues with the "doc" label
(for this one, see #1015).

On Mon., Oct. 11, 2021 at 22:41, Barry Smith wrote:

>
>Googling TSTHETA leads to an ancient MCS page for the manual page not
> to petsc.org
>
>
>   Searching for TSTHETA at petsc.org does not find the manual page; it
> looks like the search is not rigged to search the manual pages?
>
>   Barry
>
>


[petsc-dev] Postdoctoral position at ETH Zurich: Geodynamics / HPC / Julia

2021-08-31 Thread Patrick Sanan
The Geophysical Fluid Dynamics group at ETH Zurich (Switzerland) is seeking
a postdoctoral appointee to work for about 2.5 years on an ambitious
project involving developing a Julia-based library for GPU-accelerated
multiphysics solvers based on pseudotransient relaxation.

Of particular interest for this audience might be that a major component of
the proposed work is to make these solvers available via PETSc (as a SNES
implementation), thus exposing them for use within a host of existing HPC
applications, including those involved in this specific project.

We'll accept applications until the position is filled, but for full
consideration please apply before October 1, 2021.

Full information is in the ad at the following link, and please feel free
to contact me directly!
https://github.com/psanan/gpu4geo_postdoc_ad/

Best,
Patrick


Re: [petsc-dev] links from manual pages to users manual

2021-05-27 Thread Patrick Sanan


> On 26.05.2021 at 18:39, Jed Brown wrote:
> 
> Patrick Sanan  writes:
> 
>>> On 25.05.2021 at 22:58, Barry Smith wrote:
>>> 
>>> 
>>>  Now that the users manual is html and we can properly link into it, it 
>>> would be great to have links from the manual pages to appropriate locations 
>>> in the users manual. For example SNESSetFunction.html would have a link to 
>>> the generated Sphinx location where SNESSetFunction is discussed.
>>> 
>>>  How do we go about doing this? 
>>> 
>>>  Not only is this useful for users but when developers are fixing/improving 
>>> a manual page it would be nice if they had a way to jump directly to the 
>>> appropriate place in the xxx.rst that that discusses the manual page to 
>>> check that that material is also up-to-date and correct. So I guess we need 
>>> a way to link to the correct place in the .rst and the generated .html
>>> 
>> This all depends on which approach we take to make the man pages better 
>> integrated. There are competing requirements so I think it'll have to be 
>> hashed out to find the correct compromise:
>> 
>> - we need to leave things for Sowing to generate Fortran stubs
>> - we want to be able to write the man pages as .rst, like the rest of the 
>> Sphinx docs
> 
> Or Markdown; see recent activity in this issue.
> 
> https://github.com/executablebooks/MyST-Parser/issues/228#issuecomment-848505703

This is cool to know about - if I'm reading this correctly, it's not ready for 
us to adopt immediately, but I don't think you'll find many people who like RST 
more than Markdown, so if there's a good chance to switch from RST to MyST at 
some point, I think it makes sense (say, if there's a robust automatic 
rst-to-myst converter, which now seems to almost exist: 
https://github.com/executablebooks/rst-to-myst). My hesitancy to do it now 
comes from RST's privileged status as the dumb default which you can google 
about, and from not wanting to introduce MyST incrementally, because I fear 
that would make things worse for people wanting to edit (two new things to 
learn, and you can't copy-paste between the two).

> 
>> - we want the man pages inline with the source
>> - we don't want to have to manually update all the man pages
>> - we want to avoid introducing brittle scripting, if possible
>> 
>>> 
>>>  Thanks
>>> 
>>>   Barry
>>> 
>>> In the old users manual I had it rigged to have a link to the manual page 
>>> for every occurrence of a word that had a manual page in the users manual. 
>>> Is that feature lost now? Is there anyway to bring it back?
>>> 
>>> 
>> 
>> This is lost, I think. What sorts of words were these? Once we have links 
>> from the man pages to the manual, as above, would it be just as good to 
>> directly link to sections of the manual? 



Re: [petsc-dev] links from manual pages to users manual

2021-05-27 Thread Patrick Sanan



> On 27.05.2021 at 00:24, Barry Smith wrote:
> 
> 
> 
>  Jed pointed out previously that ideally we would not just generate a million 
> pages of .rst or markdown but we would have the structure of a manual page as 
> an abstract object on which one could write code to validate, to add new 
> information and to output in a particular .rst or markdown. But the abstract 
> object is what we would use to check that all manual pages have the 
> information they need and suitable seealso: etc. I would think doing this in 
> Python (maybe using parts of a Python front end to LLVM to do the function 
> call parsing?) would make sense but are we really going to have the time to 
> design the object, write a decent parser and write the printers? Sowing 
> doesn't really have the abstract object concept; it is local, in that it reads 
> in a stream and outputs what it finds almost immediately instead of building a 
> complete intermediate form. With such an intermediate form, generating Fortran, 
> Python, and other language bindings should be relatively easy and could handle 
> things like character arguments, function pointer arguments etc better. The 
> intermediate form would know which arguments are optional etc. In a sense 
> this intermediate form could define the PETSc API (for functions, structures, 
> enums, macros,...). 
> 
>  But this sounds a bit PETSc future, we can still improve things with our 
> current infrastructure.

Yes, so I think you're suggesting that we add some more processing of the 
current man pages, which allows insertion of links to pages in the Sphinx docs.

> 
>  Sorry, I was not clear: all function names, enums, etc. in the users manual 
> were automatically replaced with a link to the manual page. Can we do that 
> with Sphinx? It seems like not too much to ask for. 
> 
I think that the standard way to do it with Sphinx involves something like 
:func:`function_name`, which isn't very automatic. For that reason, and 
because our man pages are generated separately, we currently have a custom 
plugin which adds the links in code snippets (for HTML only, not in the PDF), 
using the "htmlmap" file generated by the "classic" docs build. My hope is that 
we could modify the plugin to create the correct internal links once man pages 
are better integrated.

> Barry
> 
> 
> 
> 
> 
> 
>> On May 26, 2021, at 11:36 AM, Patrick Sanan  wrote:
>> 
>> 
>> 
>>> On 25.05.2021 at 22:58, Barry Smith wrote:
>>> 
>>> 
>>> Now that the users manual is html and we can properly link into it, it 
>>> would be great to have links from the manual pages to appropriate locations 
>>> in the users manual. For example SNESSetFunction.html would have a link to 
>>> the generated Sphinx location where SNESSetFunction is discussed.
>>> 
>>> How do we go about doing this? 
>>> 
>>> Not only is this useful for users but when developers are fixing/improving 
>>> a manual page it would be nice if they had a way to jump directly to the 
>>> appropriate place in the xxx.rst that that discusses the manual page to 
>>> check that that material is also up-to-date and correct. So I guess we need 
>>> a way to link to the correct place in the .rst and the generated .html
>>> 
>> This all depends on which approach we take to make the man pages better 
>> integrated. There are competing requirements so I think it'll have to be 
>> hashed out to find the correct compromise:
>> 
>> - we need to leave things for Sowing to generate Fortran stubs
>> - we want to be able to write the man pages as .rst, like the rest of the 
>> Sphinx docs
>> - we want the man pages inline with the source
>> - we don't want to have to manually update all the man pages
>> - we want to avoid introducing brittle scripting, if possible
>> 
>>> 
>>> Thanks
>>> 
>>>  Barry
>>> 
>>> In the old users manual I had it rigged to have a link to the manual page 
>>> for every occurrence of a word that had a manual page in the users manual. 
>>> Is that feature lost now? Is there anyway to bring it back?
>>> 
>>> 
>> 
>> This is lost, I think. What sorts of words were these? Once we have links 
>> from the man pages to the manual, as above, would it be just as good to 
>> directly link to sections of the manual?
> 



Re: [petsc-dev] links from manual pages to users manual

2021-05-26 Thread Patrick Sanan



> On 25.05.2021 at 22:58, Barry Smith wrote:
> 
> 
>   Now that the users manual is html and we can properly link into it, it 
> would be great to have links from the manual pages to appropriate locations 
> in the users manual. For example SNESSetFunction.html would have a link to 
> the generated Sphinx location where SNESSetFunction is discussed.
> 
>   How do we go about doing this? 
> 
>   Not only is this useful for users but when developers are fixing/improving 
> a manual page it would be nice if they had a way to jump directly to the 
> appropriate place in the xxx.rst that that discusses the manual page to check 
> that that material is also up-to-date and correct. So I guess we need a way 
> to link to the correct place in the .rst and the generated .html
> 
This all depends on which approach we take to make the man pages better 
integrated. There are competing requirements so I think it'll have to be hashed 
out to find the correct compromise:

- we need to leave things for Sowing to generate Fortran stubs
- we want to be able to write the man pages as .rst, like the rest of the 
Sphinx docs
- we want the man pages inline with the source
- we don't want to have to manually update all the man pages
- we want to avoid introducing brittle scripting, if possible

> 
>   Thanks
> 
>Barry
> 
> In the old users manual I had it rigged to have a link to the manual page for 
> every occurrence of a word that had a manual page in the users manual. Is 
> that feature lost now? Is there anyway to bring it back?
> 
> 

This is lost, I think. What sorts of words were these? Once we have links from 
the man pages to the manual, as above, would it be just as good to directly 
link to sections of the manual? 

Re: [petsc-dev] git worktree

2021-05-19 Thread Patrick Sanan
Cool - I didn't know about this approach! If you still have your experiments 
sitting around, can you put numbers on the kind of space savings we're talking 
about vs. the dumb approach (an independent clone for every branch I'm 
interested in working on simultaneously)?
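For readers who haven't tried the feature: git worktree attaches extra working trees to a single repository, so all checkouts share one object store and each extra checkout costs roughly only the checked-out files. A throwaway demonstration (repo and branch names are made up):

```shell
# Create a scratch repository with a second branch.
tmp=$(mktemp -d)
git -C "$tmp" init -q demo
git -C "$tmp/demo" -c user.email=a@b -c user.name=demo \
    commit -q --allow-empty -m "init"
git -C "$tmp/demo" branch feature

# Attach a second working tree for 'feature'. It shares demo/.git's
# object store; the new tree's .git is just a small pointer file.
git -C "$tmp/demo" worktree add -q "$tmp/demo-feature" feature
git -C "$tmp/demo" worktree list
```

The independent-clone approach duplicates the full history per checkout, which is where the space savings come from.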

@Barry - thanks for the reminder about that script - even if I don't use it 
regularly it's good to know it's there to raid in the future when I'm pushed in 
desperation to start scripting things. 

Re the related shallow/"blobless" clone stuff I was posting about [1]: it's 
fun and work-adjacent (hence in #random) to read about, and good to be able to 
pull out of your pocket when some pathological repo comes along. But the boring 
truth is that it's another syntax to remember (or script), and there's a minor 
inconvenience in the usage (I don't like the way it behaves when you need to 
fetch something missing and there's no internet connection), so I'll likely 
never use the feature in my normal workflow. The robustness, simplicity, and 
google-ability of the dumb way are too attractive!


[1] https://github.blog/2020-12-21-get-up-to-speed-with-partial-clone-and-shallow-clone/


> On 19.05.2021 at 05:54, Scott Kruger wrote:
> 
> 
> 
> Ah. I remember your email about it, and I even have it checked out.
> I didn't get it at the time, but necessity is not only the mother of
> invention, but also of learning.
> 
> Scott
> 
> On 2021-05-18 18:39, Barry Smith did write:
>> 
>>  Scott,
>> 
>>My solution to working with multiple PETSc branches without the agonizing 
>> pain is g...@gitlab.com:petsc/petscgitbash.git 
>> 
>>One could argue it is too particular to Barry's specific workflow but 
>> perhaps it has ideas/code that can be stolen for others. It could also 
>> potentially be done using the GitLab Python bindings and thus remove the 
>> direct use of the RESTful interface.  I have been using it for about a 
>> year and a half and probably for about six months it has been pretty robust 
>> and stable. A reminder of its approach
>> 
>> #  An alias for git that manages working with multiple branches of PETSc 
>> from the command line
>> #This is specific to PETSc and not useful for any other respositories
>> #
>> #Replaces some actions that normally require cut-and-paste and/or 
>> (manually) opening the browser to gitlab.com
>> #
>> #+ Sets the PETSC_ARCH based on the branch name
>> #+ Preserves compiled code associated with the branch checked out when 
>> changing branches
>> #+ Updates lib/petsc/conf/petscvariables with the branch values so, for 
>> example, you can compile in Emacs without knowing the PETSC_ARCH in Emacs
>> #+ Creates new branches with the name 
>> ${PETSC_GIT_BRANCH_PREFIX}/DATE/yourspecificbranchname
>> #+ Adds /release to branch name if created from release branch
>> #+ Can checkout branches based on a partial branch name, if multiple 
>> branches contain the string it lists the possibilites
>> #+ Submits branches to pipeline testing from the command line
>> #+ Checks the current branches latest pipeline test results (and 
>> optionally opens the browser to the pipeline)
>> #+ Opens new or current MR without cut and paste from the branches
>> #
>> #Oana suggested the idea to save waiting for code to recompile after 
>> changing branches and the use of touch
>> #to force code to not get recompiled unnecessarily. This inspired this 
>> script which then grew uncontrollably.
>> #
>> #Does NOT change the source code in any way, only touches the object 
>> files
>> #
>> #Does not currently have a mechanism for multiple PETSC_ARCH for a 
>> single branch
>> #
>> #Requires git higher than 1.8  TODO: add a check for this
>> #
>> #  Usage:
>> # git checkout partialname
>> # git checkout -  check out the last 
>> branch you were on
>> # git checkout -b newbranchname [rootbranch] [message] adds 
>> ${PETSC_GIT_BRANCH_PREFIX}, date, and /release (when needed) to new base 
>> branch name
>> # The message can 
>> contain what the branch is for and who inspired it
>> # git checkout -b newbranchname [main or release]
>> # git pl   [partialname]  run a GitLab pipeline
>> # git cpl  [-show] [partialname]  check on status of 
>> pipeline
>> # git mr [-f] [partialname]   open new or current MR 
>> for current branch, -f allows MR without first submitting pipeline
>> # git branch -D[D] [partialname]  deletes branch you may 
>> be currently in, extra D deletes remote also
>> # git rebase [partialname]pulls main or release 
>> as appropriate and then rebases against it
>> # git brancheslists 

Re: [petsc-dev] empty space on left side of website pages

2021-04-26 Thread Patrick Sanan
Part of the reason that the huge, empty sidebar looks so bad is that it's on 
the front page - in the interests of simplicity and focusing on getting the 
build stable, I haven't been focusing on it, but there is still discussion to 
be had about whether and how to make a prettier front/landing page.

> On 26.04.2021 at 21:05, Jed Brown wrote:
> 
> The sphinx-pydata-theme has great mobile support and lots of development 
> energy behind it. I don't want to switch themes again based on a sidebar 
> sizing concern. If the sidebar width is super important, we can adjust the 
> CSS. The standard CSS has this, which I think is what we'd want to adjust.
> 
> .container-xl {
>  max-width: 1400px
> }
> 
> Scott Kruger  writes:
> 
>> Rather than have us edit the CSS, perhaps just getting people to agree
>> to a different theme:
>> https://sphinx-themes.org/
>> 
>> I think alabaster, aiohttp, cloud_sptheme, ...  meet Barry's complaint.
>> 
>> There is a lot to like on the kotti_docs_theme for example although the
>> bar is on the right instead of the left.
>> 
>> Scott
>> 
>> On 2021-04-26 08:58, Patrick Sanan did write:
>>> As far as I know (which isn't very far, with web stuff), changing things on 
>>> that level requires somehow getting into CSS.
>>> 
>>> For instance, you can see what it looks like with other widths directly 
>>> from Firefox (fun, didn't know you could do this):
>>> - go to the page
>>> - hit F12
>>> - click around on the left to find the element that corresponds to the part 
>>> you care about
>>> - look in the middle column to find the piece of CSS that's controlling 
>>> things (here, something called .col-md-3)
>>> - edit the CSS - in attached screenshot I change the max width of that 
>>> sidebar to 5%.
>>> 
>>> But, I want to avoid having to do things on the level of CSS and HTML - I 
>>> think that should be done as a collective effort in maintaining the theme 
>>> (and Sphinx itself).
>>> If we really care enough about the width of that sidebar, we'll create a 
>>> fork of the theme, add a setting for it, and try to get it merged to the 
>>> theme's release branch.
>>> 
>>> 
>>>> On 23.04.2021 at 23:12, Barry Smith wrote:
>>>> 
>>>> 
>>>>   Thanks. Even if we just leave it, is there a way to make it a little 
>>>> "skinnier"? It seems very wide in my default browser.
>>>> 
>>>> 
>>>> 
>>>>> On Apr 23, 2021, at 1:08 PM, Patrick Sanan wrote:
>>>>> 
>>>>> It is possible to put things there, as in this link which is both 
>>>>> documentation and example:
>>>>> https://pydata-sphinx-theme.readthedocs.io/en/latest/user_guide/sections.html#the-left-sidebar
>>>>> 
>>>>> Other projects using this theme have the mostly-empty left sidebar:
>>>>> https://numpy.org/doc/stable/
>>>>> https://jupyter.readthedocs.io/en/latest/
>>>>> 
>>>>> (They also have fancier landing pages, though, which we have been 
>>>>> discussing).
>>>>> 
>>>>> 
>>>>> It goes away on mobile devices or small windows, at least.
>>>>> 
>>>>> 
>>>>>> On 23.04.2021 at 19:21, Barry Smith wrote:
>>>>>> 
>>>>>> 
>>>>>>  There is a lot of empty space on the left side of the website pages, 
>>>>>> under the Search slot.  Does this empty left side need to be so large? It 
>>>>>> seems to waste a lot of the screen.
>>>>>> 
>>>>>>  Barry
>>>>>> 
>>>>> 
>>>> 
>>> 
>> 
>> -- 
>> Scott Kruger
>> Tech-X Corporation   kru...@txcorp.com
>> 5621 Arapahoe Ave, Suite A   Phone: (720) 466-3196
>> Boulder, CO 80303Fax:   (303) 448-7756



Re: [petsc-dev] empty space on left side of website pages

2021-04-23 Thread Patrick Sanan
It is possible to put things there, as in this link which is both documentation 
and example:
https://pydata-sphinx-theme.readthedocs.io/en/latest/user_guide/sections.html#the-left-sidebar

Other projects using this theme have the mostly-empty left sidebar:
https://numpy.org/doc/stable/ 
https://jupyter.readthedocs.io/en/latest/ 


(They also have fancier landing pages, though, which we have been discussing).


It goes away on mobile devices or small windows, at least.


> On 23.04.2021 at 19:21, Barry Smith wrote:
> 
> 
>   There is a lot of empty space on the left side of the website pages, under 
> the Search slot.  Does this empty left side need to be so large? It seems to 
> waste a lot of the screen.
> 
>   Barry
> 



Re: [petsc-dev] commenting on/asking questions on documentation pages

2021-04-23 Thread Patrick Sanan


> On 23.04.2021 at 04:45, Barry Smith wrote:
> 
> 
>I can edit documentation pages directly from the page now; this is totally 
> awesome, but I see no button to comment on or ask questions about a page. 
> 
>I think every page should, by the edit button, have a "Comment, ask 
> questions" button that anyone can click on to make a comment or ask a 
> question about the page. It would be super fantastic if they could refer to 
> particular people in their comments but perhaps that is too difficult. 


> For example I am looking at 
> https://petsc.gitlab.io/-/petsc/-/jobs/1204309863/artifacts/public/overview/features.html
>  and I immediately want to ask 
> 
> Where is the TS solver table in the list of solver tables?
> 
> Barry
> 
> Note: the prehistoric PETSc HTML manual pages, which everyone despises, 
> have a button in the upper right-hand corner to report problems/ask questions, 
> so what I am asking for is not unprecedented. Our old code uses email, which 
> is not ideal, but not ideal is better than nothing. Surely modern systems like 
> Sphinx have this support built in?
> 

I think the intended way to do this with our Sphinx template would be to add 
custom HTML templates, which can then be added to the sidebar.
https://pydata-sphinx-theme.readthedocs.io/en/latest/user_guide/sections.html#add-your-own-html-templates-to-theme-sections

I'm worried that this involves too much scripting and customization, though. 
For example, here's the way the "edit this page" link is done:
https://github.com/pydata/pydata-sphinx-theme/blob/master/pydata_sphinx_theme/_templates/edit-this-page.html

Doesn't seem too bad but it relies on a pretty big chunk of Python as well:
https://github.com/pydata/pydata-sphinx-theme/blob/master/pydata_sphinx_theme/__init__.py#L438
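For concreteness, a hypothetical conf.py fragment for registering such a custom sidebar template might look like this (the template and file names below are invented for illustration; the real defaults are in the theme's documentation, and this is not PETSc's actual configuration):

```python
# conf.py sketch (hypothetical): add a custom sidebar template that would
# render a "comment / ask a question" link on every page.
html_theme = "pydata_sphinx_theme"
templates_path = ["_templates"]  # where a comment-link.html template would live
html_sidebars = {
    # apply to all pages; names of the first two templates are assumptions
    # about the theme's defaults, the third is our custom one
    "**": ["search-field.html", "sidebar-nav-bs.html", "comment-link.html"],
}
```

The harder part, as noted above, is any Python logic the template needs (analogous to what the theme does to compute the "edit this page" URL).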
 




I'll open an issue on this, though, since it's entirely possible that someone 
else (or me, later) will think of a simple way to make this work, as it would 
indeed be a great feature.

Re: [petsc-dev] Regression in manualpages generation?

2021-04-03 Thread Patrick Sanan
I think this was likely a non-portable regex that was being used to match the 
function signatures,
causing it to miss anything with an internal "s" in the function name. Here's a 
quick fix for the release branch:

https://gitlab.com/petsc/petsc/-/merge_requests/3813

(This sort of thing will become easier to work with, soon, as we'll have an 
automated doc build which can be examined for each MR.)

P.S. nice email address! 

> On 03.04.2021 at 11:19, Patrick Sanan wrote:
> 
> That isn't expected - thanks for pointing this out! I can reproduce locally 
> so will take a look. 
> 
>> On 03.04.2021 at 11:10, Pierre Jolivet wrote:
>> 
>> Hello,
>> https://www.mcs.anl.gov/petsc/petsc-3.14/docs/manualpages/Mat/MatTranspose.html
>> lists available implementations while
>> https://www.mcs.anl.gov/petsc/petsc-3.15/docs/manualpages/Mat/MatTranspose.html
>> doesn’t.
>> Is this expected?
>> 
>> Thanks,
>> Pierre
> 



Re: [petsc-dev] Regression in manualpages generation?

2021-04-03 Thread Patrick Sanan
That isn't expected - thanks for pointing this out! I can reproduce locally so 
will take a look. 

> On 03.04.2021 at 11:10, Pierre Jolivet wrote:
> 
> Hello,
> https://www.mcs.anl.gov/petsc/petsc-3.14/docs/manualpages/Mat/MatTranspose.html
> lists available implementations while
> https://www.mcs.anl.gov/petsc/petsc-3.15/docs/manualpages/Mat/MatTranspose.html
> doesn’t.
> Is this expected?
> 
> Thanks,
> Pierre



Re: [petsc-dev] [petsc-users] [EXTERNAL] Re: Question about periodic conditions

2021-04-02 Thread Patrick Sanan
Here's a simplistic solution, but one which might give most users what 
they're expecting, which I think is important for a component like DMStag.

It simply changes the behavior of the functions to set uniform coordinates to 
set all local coordinates, including ghosts, using linear extrapolation.
In the periodic case, that means that the redundant boundary coordinates are 
what you would expect (in this case, 2*pi instead of 0).
https://gitlab.com/petsc/petsc/-/merge_requests/3804 
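As a toy model of that behavior (plain Python, not DMStag's implementation; the function name and signature are invented):

```python
import math

def uniform_coords_with_ghosts(n, xmin, xmax, n_ghost):
    """Model of the MR's idea: set all local coordinates, including ghost
    points, by linear extrapolation of the uniform spacing
    h = (xmax - xmin) / n over n periodic cells."""
    h = (xmax - xmin) / n
    # indices run from -n_ghost to n + n_ghost, extrapolating past both ends
    return [xmin + i * h for i in range(-n_ghost, n + n_ghost + 1)]

# 4 cells on [0, 2*pi) with one ghost layer: the redundant right-boundary
# coordinate (index n) comes out as 2*pi rather than wrapping back to 0.
coords = uniform_coords_with_ghosts(4, 0.0, 2.0 * math.pi, 1)
print(coords[-2])  # prints 6.283185307179586, i.e. 2*pi
```

The point is that ghost coordinates are extrapolated, not wrapped, so the redundant periodic point carries the "expected" value.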



> On 01.04.2021 at 16:37, Jed Brown wrote:
> 
> Matthew Knepley  writes:
> 
>> We use it to identify that the mesh is periodic and in what directions, and
>> use the length if we have to figure out the coordinates.
>> 
>>> Jed may argue that he wants you to retain the far point and use L2G to
>>> eliminate it, but that sounds like a lot more work.
>>> 
>>> Computational work or implementation work?
>>> 
>> 
>> implementation.
> 
> I think this is more implicit DMPlex assumptions than a fundamental issue.
> 
>>> I already have logic in DMStag to support something related, which was
>>> required for L2G with INSERT_VALUES in the periodic, 1-rank case, where
>>> multiple local points can respond to the same global point. So building the
>>> local coordinates with the far point and then doing L2G doesn't sound bad
>>> to implement, at least.
>>> 
>>> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMSTAG/DMStagPopulateLocalToGlobalInjective.html
>>> 
>> 
>> That may work. It is a problem in Plex because that strategy destroys the
>> topological queries. However, it sounds like you do not have that problem.
> 
> Patrick, if you're still in early stages, I would encourage you to support 
> nodally affine maps in the L2G, which allows things like rotated periodicity 
> for vector-valued fields.
> 
> https://gitlab.com/petsc/petsc/-/issues/566
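The rotated periodicity Jed mentions amounts to composing the index map with an affine map on the dofs at each node; a sketch of the rotation part for a vector-valued dof crossing a boundary rotated by theta (plain Python, hypothetical helper name, see the linked issue for the real design discussion):

```python
import math

def apply_nodal_rotation(vec, theta):
    # A nodally affine local-to-global map does not just permute entries; for
    # a vector-valued field it may also rotate the components at each node.
    c, s = math.cos(theta), math.sin(theta)
    return [c * vec[0] - s * vec[1], s * vec[0] + c * vec[1]]

# a unit x-vector crossing a quarter-turn periodic boundary becomes a y-vector
print(apply_nodal_rotation([1.0, 0.0], math.pi / 2.0))
```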



Re: [petsc-dev] [petsc-users] [EXTERNAL] Re: Question about periodic conditions

2021-03-31 Thread Patrick Sanan


> Am 31.03.2021 um 12:11 schrieb Matthew Knepley :
> 
> On Wed, Mar 31, 2021 at 3:21 AM Patrick Sanan  <mailto:patrick.sa...@gmail.com>> wrote:
> (moving to petsc-dev) 
> 
> To follow up further on this, Matt is correct as to what's happening now, but 
> periodic coordinates aren't sufficiently supported  yet in DMStag, so I will 
> add something.
> 
> The way things are set up now has a conceptual elegance to it, in that to 
> define coordinates, you use another DM which has coordinate information on 
> it, instead of other field information. It's periodic iff the primary DM is. 
> So there is no point on the right boundary, at 2 * pi in the 1D version of 
> this example, because that point would be identical to the point at 0, on the 
> left boundary.
> 
> The problem with the current implementation (for DMStag) is that the right 
> boundary of the domain [0, 2*pi) is never stored. There's no way to know the 
> width of the last cell on the right. You need that information for at least 
> two important reasons:
> 1. to visualize the mesh, where even though the boundary point is the same 
> point on the torus, you are plotting it on the plane and want different 
> representations of the point on the left and right.
> 2.  to use PIC methods (DMSwarm), where we need a way to determine if a 
> particle is in the last cell.
> 
> Matt, Mark, Dave, et al., it'd be very helpful to know if the following seems 
> like a good/bad idea to you, since I assume you resolved this same issue for 
> DMPlex + DMSwarm:
> 
> A tempting way to proceed here is to use the existing DMSetPeriodicity(), 
> which allows you to specify that missing piece of information and store it in 
> the DM. This could be called from the DMStagSetUniformCoordinatesXXX() 
> functions, so the user wouldn't have to worry about it in that case. That 
> also makes conceptual sense as that's the stage, after setup, in which you 
> specify the "embedding" part of the DM. A next step would be to make 
> DMLocalizeCoordinates() work for DMStag (and DMDA if possible, while I'm at 
> it). 
> 
> That is what I added that stuff for. In Plex, in order to generalize to 
> situations not in a box, we went to a formulation that uses a DG coordinate 
> field instead. I think that
> is overkill here and would not give you any added functionality.
> 
Ah, cool, I was wondering what the coordinate field function was for. Does the 
DMSetPeriodicity() stuff get used anymore, now that you have a different 
approach? Quickly looking it seems like it gets passed around as you create DMs 
from existing ones, and it's perhaps used in some Plex output functionality.

> Jed may argue that he wants you to retain the far point and use L2G to 
> eliminate it, but that sounds like a lot more work.


Computational work or implementation work?

I already have logic in DMStag to support something related, which was required 
for L2G with INSERT_VALUES in the periodic, 1-rank case, where multiple local 
points can respond to the same global point. So building the local coordinates 
with the far point and then doing L2G doesn't sound bad to implement, at least.
https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMSTAG/DMStagPopulateLocalToGlobalInjective.html
 
<https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMSTAG/DMStagPopulateLocalToGlobalInjective.html>
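A toy version of that wrapping behavior, for a 1-rank periodic 1D grid whose local representation keeps the redundant far point (plain Python, illustrative only, not the PETSc data structures):

```python
def periodic_l2g(n_global, n_local):
    # Local points 0..n_local-1, where indices past the end wrap back to the
    # start: several local points can map to the same global point, which is
    # why INSERT_VALUES needs an injective variant of the map.
    return [i % n_global for i in range(n_local)]

l2g = periodic_l2g(3, 4)
print(l2g)  # [0, 1, 2, 0]: local points 0 and 3 share global point 0
```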


> 
>   Thanks,
> 
> Matt
>  
>> Am 25.03.2021 um 00:20 schrieb Matthew Knepley > <mailto:knep...@gmail.com>>:
>> 
>> On Wed, Mar 24, 2021 at 7:17 PM Jorti, Zakariae via petsc-users 
>> mailto:petsc-us...@mcs.anl.gov>> wrote:
>> Hi Patrick,
>> 
>> 
>> 
>> Thanks for your responses. 
>> 
>> As for the code, I was not granted permission to share it yet. So, I cannot 
>> send it to you for the moment. I apologize for that.
>> 
>> 
>> 
>> I wanted to let you know that while I was testing my code, I discovered that 
>> when the periodic boundary conditions are activated, the coordinates 
>> accessed might be incorrect on one side of the boundary. 
>> 
>> Let me give you an example in cylindrical coordinates with a 3x3x3 DMStag 
>> mesh:  
>> 
>>  
>> 
>> 
>> 
>> 
>> PetscInt  startr,startphi,startz,nr,nphi,nz,d;
>> 
>> 
>> PetscInt  er,ephi,ez,icErmphip[3];
>> 
>> 
>> DM            dmCoorda, coordDA;
>> 
>> Vec   coordaLocal;
>> 
>> PetscScalar   ****arrCoord;
>> 
>> PetscScalar   surf;
>> 
>> 
>> 
>> DMStagCreate3d(PETSC_COMM_WORLD,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,3,3,3,PETSC_DECIDE,PETSC_DECID

Re: [petsc-dev] [petsc-users] [EXTERNAL] Re: Question about periodic conditions

2021-03-31 Thread Patrick Sanan
(moving to petsc-dev) 

To follow up further on this, Matt is correct as to what's happening now, but 
periodic coordinates aren't sufficiently supported  yet in DMStag, so I will 
add something.

The way things are set up now has a conceptual elegance to it, in that to 
define coordinates, you use another DM which has coordinate information on it, 
instead of other field information. It's periodic iff the primary DM is. So 
there is no point on the right boundary, at 2 * pi in the 1D version of this 
example, because that point would be identical to the point at 0, on the left 
boundary.

The problem with the current implementation (for DMStag) is that the right 
boundary of the domain [0, 2*pi) is never stored. There's no way to know the 
width of the last cell on the right. You need that information for at least two 
important reasons:
1. to visualize the mesh, where even though the boundary point is the same 
point on the torus, you are plotting it on the plane and want different 
representations of the point on the left and right.
2.  to use PIC methods (DMSwarm), where we need a way to determine if a 
particle is in the last cell.
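For point (2), once the far edge is known, a particle can be binned even into the wrap-around cell. A rough sketch in plain Python (illustrative names, not DMSwarm's actual point-location code), where `edges` carries exactly the currently-missing far point:

```python
import math

def locate_cell(x, edges, period):
    # edges lists the n+1 cell edges including the far point (e.g. 2*pi);
    # the query first wraps x into [edges[0], edges[0] + period).
    x = edges[0] + math.fmod(x - edges[0], period)
    if x < edges[0]:
        x += period  # math.fmod keeps the sign of its first argument
    for i in range(len(edges) - 1):
        if edges[i] <= x < edges[i + 1]:
            return i
    return len(edges) - 2  # x landed exactly on the far point

edges = [0.0, 2.0, 4.0, 2.0 * math.pi]  # nonuniform; last cell is [4, 2*pi)
print(locate_cell(7.0, edges, 2.0 * math.pi))  # wraps past 2*pi into cell 0
```

Without the far point 2*pi stored anywhere, the width of that last cell, and hence this test, cannot be recovered.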

Matt, Mark, Dave, et al., it'd be very helpful to know if the following seems 
like a good/bad idea to you, since I assume you resolved this same issue for 
DMPlex + DMSwarm:

A tempting way to proceed here is to use the existing DMSetPeriodicity(), which 
allows you to specify that missing piece of information and store it in the DM. 
This could be called from the DMStagSetUniformCoordinatesXXX() functions, so 
the user wouldn't have to worry about it in that case. That also makes 
conceptual sense as that's the stage, after setup, in which you specify the 
"embedding" part of the DM. A next step would be to make 
DMLocalizeCoordinates() work for DMStag (and DMDA if possible, while I'm at 
it). 



> Am 25.03.2021 um 00:20 schrieb Matthew Knepley :
> 
> On Wed, Mar 24, 2021 at 7:17 PM Jorti, Zakariae via petsc-users 
> mailto:petsc-us...@mcs.anl.gov>> wrote:
> Hi Patrick,
> 
> 
> 
> Thanks for your responses. 
> 
> As for the code, I was not granted permission to share it yet. So, I cannot 
> send it to you for the moment. I apologize for that.
> 
> 
> 
> I wanted to let you know that while I was testing my code, I discovered that 
> when the periodic boundary conditions are activated, the coordinates accessed 
> might be incorrect on one side of the boundary. 
> 
> Let me give you an example in cylindrical coordinates with a 3x3x3 DMStag 
> mesh:  
> 
>  
> 
> 
> 
> 
> PetscInt  startr,startphi,startz,nr,nphi,nz,d;
> PetscInt  er,ephi,ez,icErmphip[3];
> DM            dmCoorda, coordDA;
> Vec           coordaLocal;
> PetscScalar   ****arrCoord;
> PetscScalar   surf;
> 
> DMStagCreate3d(PETSC_COMM_WORLD,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,3,3,3,PETSC_DECIDE,PETSC_DECIDE,PETSC_DECIDE,1,1,1,1,DMSTAG_STENCIL_BOX,1,NULL,NULL,NULL,&coordDA);
> 
> DMSetFromOptions(coordDA);
> DMSetUp(coordDA);
> 
> DMStagGetCorners(coordDA,&startr,&startphi,&startz,&nr,&nphi,&nz,NULL,NULL,NULL);
> 
> DMGetCoordinateDM(coordDA,&dmCoorda);
> DMGetCoordinatesLocal(coordDA,&coordaLocal);
> DMStagVecGetArrayRead(dmCoorda,coordaLocal,&arrCoord);
> 
> for (d=0; d< 3; ++d){
>   DMStagGetLocationSlot(dmCoorda,UP_LEFT,d,&icErmphip[d]);
> }
> 
> er = 1; ez = 0;
> for (ephi=0; ephi< 3; ++ephi){
>   PetscPrintf(PETSC_COMM_WORLD,"Phi_p(%d,%d,%d) = %E\n",er,ephi,ez,(double)arrCoord[ez][ephi][er][icErmphip[1]]);
> }
> 
> 
> 
> When I execute this example, I get this output:
> 
> Phi_p(1,0,0) = 2.094395E+00
> 
> Phi_p(1,1,0) = 4.188790E+00
> 
> Phi_p(1,2,0) = 0.00E+00
> 
> 
> 
> Note here that the first two lines correspond to 2π / 3 and 4π / 3 
> respectively. Thus, nothing is wrong here. 
> 
> But the last line should rather give 2π instead of 0.
> 
> 
> 
> I understand that degrees of freedom should be the same on both sides of the 
> boundary, but should the coordinates not be preserved? 
> 
> 
> I don't think so. The circle has coordinates in [0, 2\pi), so the point at 
> 2\pi is identified with the point at 0 and you must choose
> one, so we choose 0.
> 
>   Thanks,
> 
>  Matt 
> Thank you.
> 
> Best regards,
> 
> 
> 
> Zakariae Jorti
> 
> From: Patrick Sanan mailto:patrick.sa...@gmail.com>>
> Sent: Tuesday, March 23, 2021 11:37:04 AM
> To: Jorti, Zakariae
> Cc: petsc-us...@mcs.anl.gov <mailto:petsc-us...@mcs.anl.gov>
> Subject: [EXTERNAL] Re: Question about periodic conditions
>  
> Hi Zakariae - sorry about the delay - responses inline bel

Re: [petsc-dev] petsc release plan for march 2021

2021-03-29 Thread Patrick Sanan
I added that milestone to some of the current docs MRs but it’s probably
too tight, so I suggest removing it - it probably doesn’t matter much what’s
in the tarball for docs, so it’s safer to have the current docs. We could have a
patch release which updates the docs for release, once we’re happy with the
docs build on main.

Satish Balay via petsc-dev  schrieb am So. 28. März
2021 um 20:58:

> Perhaps I should not have kept a weekend deadline here.
>
> Lets use 'freeze': 'March 29 (Mon) 5PM CST' - but retain the release date
> 'March 30 5PM EST (we have March 31 - if needed)
>
> Satish
>
>  On Sun, 28 Mar 2021, Satish Balay via petsc-dev wrote:
>
> > A reminder!
> >
> > Satish
> >
> > On Tue, 9 Mar 2021, Satish Balay via petsc-dev wrote:
> >
> > > All,
> > >
> > > Its time for another PETSc release - due end of March.
> > >
> > > For this release [3.15], will work with the following dates:
> > >
> > > - feature freeze: March 28 say 5PM EST
> > > - release: March 30 say 5PM EST
> > >
> > > Merges after freeze should contain only fixes that would normally be
> acceptable to release workflow.
> > >
> > > I've created a new milestone 'v3.15-release'. So if you are working on
> a MR with the goal of merging before release - its best to use this tag
> with the MR.
> > >
> > > And it would be good to avoid merging large changes at the last
> minute. And not have merge requests stuck in need of reviews, testing and
> other necessary tasks.
> > >
> > > And I would think the testing/CI resources would get stressed in this
> timeframe - so it would be good to use them judiciously if possible.
> > >
> > > - if there are failures in stage-2 or 3 - and its no longer necessary
> to complete all the jobs - one can 'cancel' the pipeline.
> > > - if a fix needs to be tested - one can first test with only the
> failed jobs (if this is known) - before doing a full test pipeline. i.e:
> > >- use the automatically started and paused 'merge-request' pipeline
> (or start new 'web' pipeline, and cancel it immediately)
> > >- now toggle only the jobs that need to be run
> > >- [on success of the selected jobs] if one wants to run the full
> pipeline - click 'retry' - and the remaining canceled jobs should now get
> scheduled.
> > >
> > > Thanks,
> > > Satish
> > >
> >
>
>


Re: [petsc-dev] configureLibrary fails for c++11 projects

2021-03-23 Thread Patrick Sanan
I had a related (I think) issue trying to build with Kokkos. Those headers 
throw an #error if they're expecting OpenMP and the compiler doesn't have the 
OpenMP flag. I have an open MR here (number 60^2!) which adds the OpenMP 
flag to the CXXPPFLAGS: 
https://gitlab.com/petsc/petsc/-/merge_requests/3600 



My collaborator at CSCS was testing with the latest Kokkos and ran into an even 
hairier version of this problem trying to use CUDA - the Kokkos headers now 
apparently check that you're using nvcc. He has some workaround which I'll 
review and hopefully be able to submit. 


> Am 23.03.2021 um 17:04 schrieb Stefano Zampini :
> 
> The check fails within buildsystem when running mpicc -E (which uses 
> CXXPPFLAGS)  The package header needs c++11  to be included properly. C++11 
> is also needed at preprocessing time
> 
> Il Mar 23 Mar 2021, 18:59 Satish Balay  > ha scritto:
> -std=cxx11 for sure is a compile flag. But don't really know if its
> also needed at pre-process stage and/or at link stage.
> 
> And for compile stage both CXXFLAGS and CXXPPFLAGS should get
> used. [PETSc makefiles make sure this is the case]
> 
> And for link stage CXXFLAGS and LDFLAGS get used [but then sometimes
> we have CLINKER, and FLINKER - and they certainly don't use CXXFLAGS -
> so -std=cxx11 isn't really needed at link time?
> 
> So the previous default of CXXPPFLAGS=-std=cxx11 looks reasonable to me.
> 
> However if this project is not using PETSc makefiles - it should make sure 
> all compile flags are grabbed.
> 
> # lib/petsc/conf/variables
> PETSC_CXXCPPFLAGS   = ${PETSC_CC_INCLUDES} ${PETSCFLAGS} ${CXXPP_FLAGS} 
> ${CXXPPFLAGS}
> CXXCPPFLAGS = ${PETSC_CXXCPPFLAGS}
> PETSC_CXXCOMPILE_SINGLE = ${CXX} -o $*.o -c ${CXX_FLAGS} ${CXXFLAGS} 
> ${CXXCPPFLAGS}
> 
> # lib/petsc/conf/rules
> .cpp.o .cxx.o .cc.o .C.o:
> ${PETSC_CXXCOMPILE_SINGLE} `pwd`/$<
> 
> # gmakefile.test
> PETSC_COMPILE.cxx = $(call quiet,CXX) -c $(CXX_FLAGS) $(CXXFLAGS) 
> $(CXXCPPFLAGS) $(CXX_DEPFLAGS)
> 
> # lib/petsc/conf/test
> LINK.cc = $(CXXLINKER) $(CXX_FLAGS) $(CXXFLAGS) $(CXXCPPFLAGS) $(LDFLAGS)
> 
> Satish
> 
> 
> On Tue, 23 Mar 2021, Junchao Zhang wrote:
> 
> > I would rather directly change the project to use CXXFLAGS instead of
> > CXXPPFLAGS.
> > 
> > --Junchao Zhang
> > 
> > 
> > On Tue, Mar 23, 2021 at 10:01 AM Satish Balay via petsc-dev <
> > petsc-dev@mcs.anl.gov > wrote:
> > 
> > > On Tue, 23 Mar 2021, Stefano Zampini wrote:
> > >
> > > > Just tried out of main, and and the include tests of a c++11 project 
> > > > fail
> > > > Below my fix, if we agree on, I'll make a MR
> > > >
> > > > diff --git a/config/BuildSystem/config/compilers.py
> > > > b/config/BuildSystem/config/compilers.py
> > > > index c96967e..44e4657 100644
> > > > --- a/config/BuildSystem/config/compilers.py
> > > > +++ b/config/BuildSystem/config/compilers.py
> > > > @@ -527,6 +527,8 @@ class Configure(config.base.Configure):
> > > >  if self.setCompilers.checkCompilerFlag(flag, includes,
> > > > body+body14):
> > > >newflag = getattr(self.setCompilers,LANG+'FLAGS') + ' ' +
> > > flag #
> > > > append flag to the old
> > > >setattr(self.setCompilers,LANG+'FLAGS',newflag)
> > > > +  newflag = getattr(self.setCompilers,LANG+'PPFLAGS') + ' ' +
> > > flag
> > > > # append flag to the old
> > > > +  setattr(self.setCompilers,LANG+'PPFLAGS',newflag)
> > >
> > >
> > > https://gitlab.com/petsc/petsc/commit/ead1aa4045d7bca177e78933b9ca25145fc3c574
> > >  
> > > 
> > >
> > >   self.setCompilers.CXXPPFLAGS += ' ' + flag
> > >   newflag = getattr(self.setCompilers,LANG+'FLAGS') + ' ' + flag #
> > > append flag to the old
> > >   setattr(self.setCompilers,LANG+'FLAGS',newflag)
> > >
> > > So the old code was setting 'PPFLAGS' - but this commit changed to
> > > 'FLAGS'. Maybe this flag is needed at both compile time and link time?
> > >
> > > So this project is somehow using CXXPPFLAGS - but not CXXFLAGS?
> > >
> > > I'm fine with adding it to PPFLAGS - duplicate listing hopefully shouldn't
> > > cause grief.
> > >
> > > Satish
> > >
> > > >cxxdialect = 'C++14'
> > > >self.addDefine('HAVE_'+LANG+'_DIALECT_CXX14',1)
> > > >self.addDefine('HAVE_'+LANG+'_DIALECT_CXX11',1)
> > > > @@ -546,6 +548,8 @@ class Configure(config.base.Configure):
> > > >  if self.setCompilers.checkCompilerFlag(flag, includes, body):
> > > >newflag = getattr(self.setCompilers,LANG+'FLAGS') + ' ' +
> > > flag #
> > > > append flag to the old
> > > >setattr(self.setCompilers,LANG+'FLAGS',newflag)
> > > > +  newflag = getattr(self.setCompilers,LANG+'PPFLAGS') + ' ' +
> > > flag
> > > > # append flag to the old
> > > > +  

Re: [petsc-dev] MatTransposeMatMult() bug

2021-03-18 Thread Patrick Sanan
Sorry about the current mess but that page is halfway migrated, so any updates 
should go here:
https://docs.petsc.org/en/main/install/externalsoftware_documentation/ 




> Am 18.03.2021 um 15:22 schrieb Zhang, Hong via petsc-dev 
> :
> 
> Pierre,
> This is an external package to petsc. Shall it be listed at
> https://www.mcs.anl.gov/petsc/miscellaneous/external.html 
> ?
> Hong
> From: Pierre Jolivet 
> Sent: Thursday, March 18, 2021 1:16 AM
> To: Zhang, Hong 
> Cc: For users of the development version of PETSc 
> Subject: Re: [petsc-dev] MatTransposeMatMult() bug
>  
> https://www.sciencedirect.com/science/article/abs/pii/S089812212155 
> 
> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/KSP/KSPHPDDM.html
>  
> 
> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCHPDDM.html 
> 
> I need to update the PETSc user manual though, specifically with respect to 
> systems with multiple right-hand sides.
> But don’t worry, Stefano has sorted the bug out, which was due to a faulty 
> MatSetFromOptions() in MatMAIJ, used by MatTransposeMatMult().
> 
> Thanks,
> Pierre
> 
>> On 17 Mar 2021, at 11:21 PM, Zhang, Hong > > wrote:
>> 
>> What is hpddm? I do not see its document.
>> Hong
>> 
>> From: Matthew Knepley mailto:knep...@gmail.com>>
>> Sent: Wednesday, March 17, 2021 2:49 PM
>> To: Zhang, Hong mailto:hzh...@mcs.anl.gov>>
>> Cc: Pierre Jolivet mailto:pie...@joliv.et>>; For users of 
>> the development version of PETSc > >
>> Subject: Re: [petsc-dev] MatTransposeMatMult() bug
>>  
>> On Wed, Mar 17, 2021 at 3:27 PM Zhang, Hong via petsc-dev 
>> mailto:petsc-dev@mcs.anl.gov>> wrote:
>> Pierre,
>> Do you mean a possible bug in C=AtB MatTransposeMatMult()?
>> Can you provide a stand-alone test without hpddm that reproduces this error? 
>> 
>> Hong, you should be able to just configure with --download-hpddm and then 
>> run that ex76 test.
>> 
>>   Thanks,
>> 
>>  Matt
>>  
>> Hong
>> From: petsc-dev > > on behalf of Pierre Jolivet 
>> mailto:pie...@joliv.et>>
>> Sent: Wednesday, March 17, 2021 4:31 AM
>> To: For users of the development version of PETSc > >
>> Subject: [petsc-dev] MatTransposeMatMult() bug
>>  
>> Hello,
>> While trying out Stefano’s PCApplyMat_MG() code (*), we stumbled upon weird 
>> numerical errors when reusing a Mat for both MatProduct_AB and 
>> MatProduct_AtB.
>> This reminded me that there has been a long-standing issue with 
>> MatTransposeMatMult(), see 
>> https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/pc/impls/hpddm/hpddm.cxx.html#line608
>>  
>> ,
>>  that I never looked into.
>> I’ve now been trying to figure this out, because this has side effects in 
>> multiple places (PCMG and PCHPDDM at least), and thus could impact user-code 
>> as well?
>> With this commit: 
>> https://gitlab.com/petsc/petsc/-/commit/03d8bd538039defc2fcc3e37d523735c4aaceba0
>>  
>> 
>> +
>> $ mpirun -n 4 src/ksp/ksp/tutorials/ex76 -ksp_converged_reason -pc_type 
>> hpddm -pc_hpddm_levels_1_eps_nev 20 -ksp_type preonly -mat_type aij 
>> -load_dir ${DATAFILESPATH}/matrices/hpddm/GENEO -rhs 2 
>> -pc_hpddm_coarse_correction balanced -C_input_mattransposematmult 
>> -D_output_mattransposematmult
>> I’m seeing that C is nonzero, but D is full of zeros.
>> Mat Object: 4 MPI processes
>>   type: mpidense
>> 5.7098316584361917e-08 1.0159399260517841e-07
>> 1.5812349976211856e-07 2.0688121715350138e-07
>> 2.4887556933361981e-08 4.8111092300772958e-08
>> 1.4606298643602107e-07 1.7213611729839211e-07
>> […]
>> Mat Object: 4 MPI processes
>>   type: mpidense
>> 0.e+00 0.e+00
>> 0.e+00 0.e+00
>> 0.e+00 0.e+00
>> 0.e+00 0.e+00
>> […]
>> 
>> If one switches to a MatType which has no 

Re: [petsc-dev] integrate petsc4py tarball generation with petsc tarball generation

2021-03-15 Thread Patrick Sanan
I tried earlier but I don't know what I'm doing, so was hoping Lisandro would 
help. If it's worth anything this is what I did:

# set PETSC_DIR and PETSC_ARCH for minimal docs build as used for Sphinx 
(basically just c2html and sowing).
# had to remember to actually build the library or you get a link error with 
-lpetsc 
cd src/binding/petsc4py
# I have all the requisite packages in my base Python 3.7 conda environment (I 
think), including epydoc
make PYTHON=python PYTHON2=python PYTHON3=python docs

/Users/patrick/opt/miniconda3/bin/rst2html.py --input-encoding=utf-8 
--no-compact-lists --cloak-email-addresses ./LICENSE.rst  > docs/LICENSE.html
/Users/patrick/opt/miniconda3/bin/rst2html.py --input-encoding=utf-8 
--no-compact-lists --cloak-email-addresses ./CHANGES.rst  > docs/CHANGES.html
/Users/patrick/opt/miniconda3/bin/rst2html.py --input-encoding=utf-8 
--no-compact-lists --cloak-email-addresses docs/index.rst > docs/index.html
mkdir -p build/doctrees docs/usrman
sphinx-build -b html -d build/doctrees  \
docs/source docs/usrman
Sphinx v2.4.4 in Verwendung
loading pickled environment... erledigt
building [mo]: targets for 0 po files that are out of date
building [html]: targets for 0 source files that are out of date
updating environment: 0 added, 0 changed, 0 removed
looking for now-outdated files... none found
no targets are out of date.
build abgeschlossen.

The HTML pages are in docs/usrman.
rm -f docs/usrman/.buildinfo
python setup.py build_src
running build_src
mkdir -p docs/apiref
env CFLAGS=-O0 python setup.py -q build --build-lib build/lib.py2
env PYTHONPATH=$PWD/build/lib.py2 python -c 'import petsc4py.PETSc'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File 
"/Users/patrick/code/petsc-doc/src/binding/petsc4py/build/lib.py2/petsc4py/PETSc.py",
 line 3, in <module>
PETSc = ImportPETSc(ARCH)
  File 
"/Users/patrick/code/petsc-doc/src/binding/petsc4py/build/lib.py2/petsc4py/lib/__init__.py",
 line 29, in ImportPETSc
return Import('petsc4py', 'PETSc', path, arch)
  File 
"/Users/patrick/code/petsc-doc/src/binding/petsc4py/build/lib.py2/petsc4py/lib/__init__.py",
 line 73, in Import
module = import_module(pkg, name, path, arch)
  File 
"/Users/patrick/code/petsc-doc/src/binding/petsc4py/build/lib.py2/petsc4py/lib/__init__.py",
 line 58, in import_module
with f: return imp.load_module(fullname, f, fn, info)
  File "/Users/patrick/opt/miniconda3/lib/python3.7/imp.py", line 242, in 
load_module
return load_dynamic(name, filename, file)
  File "/Users/patrick/opt/miniconda3/lib/python3.7/imp.py", line 342, in 
load_dynamic
return _load(spec)
ImportError: 
dlopen(/Users/patrick/code/petsc-doc/src/binding/petsc4py/build/lib.py2/petsc4py/lib/arch-classic-docs/PETSc.cpython-37m-darwin.so,
 2): Symbol not found: _dasum
  Referenced from: 
/Users/patrick/code/petsc-doc/arch-classic-docs/lib/libpetsc.3.014.dylib
  Expected in: flat namespace
 in /Users/patrick/code/petsc-doc/arch-classic-docs/lib/libpetsc.3.014.dylib
make: *** [epydoc-html] Error 1


> Am 15.03.2021 um 16:59 schrieb Barry Smith :
> 
> 
>Well someone tried importing epydoc for python3 and it did not generate an 
> error, did they try generating the petsc4py docs with it. That would tell you 
> if it works?
> 
>Yes, we do want to be able to build all the PETSc docs together in a 
> portable way.
> 
>> On Mar 15, 2021, at 10:56 AM, Patrick Sanan > <mailto:patrick.sa...@gmail.com>> wrote:
>> 
>> 
>> 
>>> Am 15.03.2021 um 16:26 schrieb Satish Balay >> <mailto:ba...@mcs.anl.gov>>:
>>> 
>>> On Mon, 15 Mar 2021, Lisandro Dalcin wrote:
>>> 
>>>> On Mon, 15 Mar 2021 at 07:06, Satish Balay >>> <mailto:ba...@mcs.anl.gov>> wrote:
>>>> 
>>>>> Lisandro,
>>>>> 
>>>>> For the upcoming release its best to update lib/petsc/bin/maint/builddist
>>>>> to also build petsc4py tarball in sync with petsc tarball.
>>>>> 
>>>>> What is the current process to generate petsc4py tarball?
>>>>> 
>>>>> 
>>>>> 
>>>>> BTW: I stumbled into a couple of issues with building petsc4py docs
>>>>> 
>>>>> 1. the docs build process requires petsc library to be built?
>>>>> 
>>>>> 
>>>> 
>>>> 
>>>> 
>>>>>>>>> 
>>>>> python setup.py build_src
>>>>> running build_src
>>>>> cythonizing 'petsc4py.PETSc.pyx' -> 'petsc4py.PETSc.c'
>>>>> cythonizing 'libpetsc4py/libpetsc4py.pyx' -> 'libpetsc4py/libpetsc4py.c'
>>>>> mkdir -p docs/apiref
>>>

Re: [petsc-dev] integrate petsc4py tarball generation with petsc tarball generation

2021-03-15 Thread Patrick Sanan


> Am 15.03.2021 um 16:26 schrieb Satish Balay :
> 
> On Mon, 15 Mar 2021, Lisandro Dalcin wrote:
> 
>> On Mon, 15 Mar 2021 at 07:06, Satish Balay  wrote:
>> 
>>> Lisandro,
>>> 
>>> For the upcoming release its best to update lib/petsc/bin/maint/builddist
>>> to also build petsc4py tarball in sync with petsc tarball.
>>> 
>>> What is the current process to generate petsc4py tarball?
>>> 
>>> 
>>> 
>>> BTW: I stumbled into a couple of issues with building petsc4py docs
>>> 
>>> 1. the docs build process requires petsc library to be built?
>>> 
>>> 
>> 
>> 
>> 
>>> 
>>> python setup.py build_src
>>> running build_src
>>> cythonizing 'petsc4py.PETSc.pyx' -> 'petsc4py.PETSc.c'
>>> cythonizing 'libpetsc4py/libpetsc4py.pyx' -> 'libpetsc4py/libpetsc4py.c'
>>> mkdir -p docs/apiref
>>> env CFLAGS=-O0 python2 setup.py -q build --build-lib build/lib.py2
>>> /usr/bin/ld: cannot find -lpetsc
>>> collect2: error: ld returned 1 exit status
>>> error: command 'gcc' failed with exit status 1
>>> make[2]: *** [makefile:110: epydoc-html] Error 1
>>> gmake[1]: [makefile:422: sphinx-docs-all] Error 2 (ignored)
>>> <<<
>>> 
>>> 2. Any particular reason it needs python2? I see it requires
>>> docutils,epydoc - but I see python3 is able to install them.
>>> 
>>> 
>> It requires petsc4py to be installed in Python 2, such that epydoc can
>> build the API reference.
> 
> But epydoc can be installed with python3. So this is more of petsc4py code 
> that uses epydoc - than epydoc code?
> 
>> I have not found a nice replacement for epydoc-generated documentation.
>> If Python 2 is an annoyance, then just remove, comment-out anything related
>> to epydoc in "makefile"
> 
> Well I can work around it. [by installing both python2 and python3 versions 
> of epydoc via pip]
> 
> However its not clear to me where the python2 requirement is coming
> from - and if things can be unified using python3
> 
> A related issue: I think Patrick is working on migrating some of the
> petsc4py docs to Sphinx. And currently Sphinx is installed in a python3
> virt-env [so now we need both python3 for the petsc side, and
> python2 for petsc4py side]

I haven't dug into the petsc4py docs yet, but I was very much hoping that we 
could at least make all the docs build at once and be deployed to the same URL, 
even if for now they are building with an assortment of tools. (We could 
include the petsc4py docs, for now, as we're doing with the HTML man pages and 
sources from c2html, specifying them as "extra html" with Sphinx). As Satish 
says, it'd be nice to do this with the same Python environment we use to build 
the Sphinx docs - I would naively hope/assume that since epydoc exists for 
python3 and petsc4py works with python3, it would be possible.

> 
>> and run "make sdist".
> 
> Ah - ok. Will check this to see if I can generate the tarball in sync
> with petsc tarball. What source files need updating for
> release/version info?
> 
>> But this way the tarball will miss the API reference.
> 
>> PS: All this could be converted to a script that installs petsc4py and
>> epydoc in a Python 2 virtual environment, and next users another venv for
>> the Python 3 Sphinx stuff.
> 
> One issue: Fedora-33 does not have 'python2-pip' anymore. I was able
> to manually install it.  However this might become an issue for others
> who want to build docs [even if we automate the docs build to
> install/use pyton2-venv]
> 
> Satish



Re: [petsc-dev] Argonne GPU Virtual Hackathon - Accepted

2021-03-13 Thread Patrick Sanan
Another thing perhaps of interest is the stencil-based GPU matrix assembly 
functionality that Mark introduced.

> Am 13.03.2021 um 07:59 schrieb Stefano Zampini :
> 
> The COO assembly is entirely based on thrust primitives, I don’t have much 
> experience to say we will get a serious speedup by writing our own kernels, 
> but it is definitely worth a try if we will end up adopting COO as entry 
> point for GPU irregular assembly.
> Jed, you mentioned BDDC deluxe, what do you mean by that? Porting 
> setup/application of deluxe scaling onto GPU?
> 
> Timings are not so bad for me joining the hackaton. 
> 
>> On Mar 13, 2021, at 8:17 AM, Barry Smith > > wrote:
>> 
>> 
>> 
>>> On Mar 12, 2021, at 10:49 PM, Jed Brown >> > wrote:
>>> 
>>> Barry Smith mailto:bsm...@petsc.dev>> writes:
>>> 
> On Mar 12, 2021, at 6:58 PM, Jed Brown  > wrote:
> 
> Barry Smith mailto:bsm...@petsc.dev>> writes:
> 
>>I think we should start porting the PetscFE infrastructure, numerical 
>> integrations, vector and matrix assembly to GPUs soon. It is dog slow on 
>> CPUs and should be able to deliver higher performance on GPUs. 
> 
> IMO, this comes via interfaces to libCEED, not rolling yet another way to 
> invoke quadrature routines on GPUs.
 
  I am not talking about matrix-free stuff, that definitely belongs in 
 libCEED, no reason to rewrite. 
 
  But does libCEED also support the traditional finite element construction 
 process where the matrices are built explicitly? Or does it provide some 
 of the code, integration points, integration formula etc. that could be 
 shared and used as a starting point? If it includes all of these 
 "traditional" things then we should definitely get it all hooked into 
 PetscFE/DMPLEX and go to town. (But yes not so much need for the GPU 
 hackathon since it is wiring more than GPU code). The way I have always 
 heard about libCEED was as a matrix-free engine, so I may have misunderstood. 
 It is definitely not my intention to start a project that 
 reproduces functionality that we can just use. 
>>> 
>>> MFEM wants this too and it's in a draft libCEED PR right now. My intent is 
>>> to ensure it's compatible with Stefano's split-phase COO assembly. 
>> 
>>  Cool, would this be something that, in combination with perhaps some 
>> libCEED folk, could be incorporated in the Hackathon? Anyone can join our 
>> group Hackathon group, they don't have to have any financial connection with 
>> "PETSc". 
>> 
>>> 
  We do need solid support for traditional finite element assembly on GPUs, 
 matrix-free finite elements alone is not enough.
>>> 
>>> Agreed, and while libCEED could be further optimized for lowest order, even 
>>> naive assembly will be faster than what's in DMPlex.
> 



Re: [petsc-dev] petsc release plan for march 2021

2021-03-09 Thread Patrick Sanan



> On 09.03.2021 at 19:58, Satish Balay via petsc-dev wrote:
> 
> All,
> 
> Its time for another PETSc release - due end of March.
> 
> For this release [3.15], will work with the following dates:
> 
> - feature freeze: March 28 say 5PM EST
> - release: March 30 say 5PM EST
> 
> Merges after freeze should contain only fixes that would normally be 
> acceptable to release workflow.
> 
> I've created a new milestone 'v3.14-release'. So if you are working on a MR 
> with the goal of merging before release - its best to use this tag with the 
> MR.
v3.15-release ?
> 
> And it would be good to avoid merging large changes at the last minute. And 
> not have merge requests stuck in need of reviews, testing and other necessary 
> tasks.
> 
> And I would think the testing/CI resources would get stressed in this 
> timeframe - so it would be good to use them judiciously if possible.
> 
> - if there are failures in stage-2 or 3 - and its no longer necessary to 
> complete all the jobs - one can 'cancel' the pipeline.
> - if a fix needs to be tested - one can first test with only the failed jobs 
> (if this is known) - before doing a full test pipeline. i.e:
>   - use the automatically started and paused 'merge-request' pipeline (or 
> start new 'web' pipeline, and cancel it immediately)
>   - now toggle only the jobs that need to be run
>   - [on success of the selected jobs] if one wants to run the full pipeline 
> - click 'retry' - and the remaining canceled jobs should now get scheduled.
> 
> Thanks,
> Satish



Re: [petsc-dev] plan to transition to new documentation and webpages?

2021-03-07 Thread Patrick Sanan


> On 07.03.2021 at 01:01, Barry Smith wrote:
> 
> 
> 
>> On Mar 6, 2021, at 4:46 PM, Jed Brown wrote:
>> 
>> The one value-add that comes from ReadTheDocs is its version switcher, which 
>> we'd need to do ourselves.
> 
>> 
>> I've been using this strategy (for stand-alone preview) on a different 
>> project and it's working great. We can decide how to merge it (i.e., where 
>> the doc job should run).
>> 
>> https://gitlab.com/petsc/petsc/-/merge_requests/3523 
>> 
> 
>This seems fine if it builds all the docs, manual pages, HTML of source 
> code etc.  But it doesn't seem to have anything to do with ReadTheDocs? It is 
> not worth arguing over whether this special build takes place on a cloud 
> machine or an MCS machine IMHO; the only question is where the resulting 
> hundreds of megabytes of webpages end up. Can we just have them as artifacts 
> on gitlab.com/petsc/petsc  ? I am fine with 
> that if it works.
>> 
>> I think we should run https://petsc.org  from docs 
>> generated on main using Sphinx.
> 
>I think Satish's question is what about after a release? Does 
> https://petsc.org  point to docs built from release or 
> main?  Some users will want release but others will want main. Currently I 
> think we have tricks that use a combination of information from both in a 
> couple of places.
> 
>    What about all the other parts of the docs not built using Sphinx: manual 
> pages, HTML source? 
> 
>> We can link to the old site for older versions. I'd be in favor of making 
>> https://mcs.anl.gov/petsc  (but not its 
>> subdirectories) redirect to petsc.org .
> 
>   That seems ok to me; but will we need to retrain Google? Currently it knows 
> which is the "best" webpage for everything in PETSc (like KSPSolve etc.) as 
> being at www.mcs.anl.gov/petsc/... so we need 
> to update the docs to start from https://petsc.org/ 
> 
> 
>   We need to start making decisions. They seem to be 
> 
> 1)  On what physical system will all the docs (Sphinx, manual pages, HTML 
> source) actually live? 
> 
> Choices? gitlab.com (as artifacts or something 
> else), ReadTheDocs (can it handle storing the non-Sphinx content?), ANL 
> (drawback: only someone with an account can get them onto the system), 
> something else?

As far as I can tell so far, RTD can handle the extra HTML pages (my current 
WIP builds them all, but doesn't move them over to the final location - that 
might be slow and I don't know if there's a size limit I didn't read about 
yet). 

One thing RTD gives us (I think) is a nicer search feature. That's been quite 
useful for me searching the docs, e.g. 
https://docs.petsc.org/en/latest/search/?q=pc_type 


There are definite downsides to not having as much control, though.

> 
> 2) How are all the docs (Sphinx and not) built to go onto the physical system 
> they will be on?
> 
> Choices? gitlab.com (nice, since anyone can trigger 
> their building), ReadTheDocs (again nice that anyone can trigger their 
> building, but can they handle the non-Sphinx parts easily?), ANL machine 
> (bad, since it requires someone with an account to trigger their build and/or 
> storage), something else?

My personal jury is still out on how easy it can be to do the complete build on 
RTD (see WIP MR https://gitlab.com/petsc/petsc/-/merge_requests/3684). 
> 
> 3) Does the default point completely to docs built on release? Or built on 
> main? Or built on some strange combination? 
> 
> Regardless of these questions I think we agree https://petsc.org/ 
>   (and https://mcs.anl.gov/petsc 
>  by linking) will be how people find the docs. 
> 
Yes, though maybe it's tempting to build at mcs.anl.gov/petsc 
so that pages with the same content could even have 
the same URLs, which would be nice for any external links that exist, and for 
search engines' data on the man pages.
> 
>> 
>> Barry Smith writes:
>> 
>>>   Is the plan still to use ReadTheDocs (which does support multiple 
>>> versions of all the docs) or to "build them ourselves"? All ReadTheDocs 
>>> does is run a Sphinx document builder script the user provides and we can 
>>> do that ourselves and don't need ReadTheDocs to do it for us. In fact, if 
>>> we do it ourselves we have much more flexibility since we are not 
>>> restricted to running only a Sphinx document builder script.
>>> 
>>>   Patrick, Jacob and others have done a fantastic job moving a lot of 
>>> material into much better pages, it seems nuts not to be using it all.
>>> 
>>>  Barry
>>> 
>>> 
 

Re: [petsc-dev] plan to transition to new documentation and webpages?

2021-03-07 Thread Patrick Sanan
Here's my WIP on having the sphinx build include a consistent set of HTML pages 
from the "classic" docs build.
https://gitlab.com/petsc/petsc/-/merge_requests/3684



> On 07.03.2021 at 08:37, Patrick Sanan wrote:
> 
> I'm working on this right now - I think I have a workable way, where we just 
> use the "classic" docs system to build the man pages and HTML sources for 
> each version on RTD, and then specify those HTML files as "extra" to Sphinx, 
> so it just copies them over in the build and we can refer to them with local 
> links (and their internal relative links work). Hopefully I can have a demo 
> in the next couple of days.
> 
>> On 06.03.2021 at 19:33, Satish Balay via petsc-dev wrote:
>> 
>> This is partly due to the complexity of having some docs from 'release' and 
>> some from 'main' branches.
>> 
>> We had a way to manage this when all docs were on the petsc website - but it's 
>> not clear how to do this properly with ReadTheDocs
>> 
>> Satish
>> 
>> On Fri, 5 Mar 2021, Barry Smith wrote:
>> 
>>> 
>>> What is the plan to transition to the new documentation and webpages?
>>> 
>>> I go to https://www.mcs.anl.gov/petsc/index.html 
>>> <https://www.mcs.anl.gov/petsc/index.html> and mostly see the old stuff 
>>> etc. Our next release is coming up soon and it would be nice to have 
>>> transitioned out of the old material and to the new material by/at the new 
>>> release.
>>> 
>>> What do we need to do to make this happen?
>>> 
>>>  Thanks
>>> 
>>>  Barry
>>> 
>>> 
>> 
> 



Re: [petsc-dev] plan to transition to new documentation and webpages?

2021-03-06 Thread Patrick Sanan
I'm working on this right now - I think I have a workable way, where we just 
use the "classic" docs system to build the man pages and HTML sources for each 
version on RTD, and then specify those HTML files as "extra" to Sphinx, so it 
just copies them over in the build and we can refer to them with local links 
(and their internal relative links work). Hopefully I can have a demo in the 
next couple of days.

> On 06.03.2021 at 19:33, Satish Balay via petsc-dev wrote:
> 
> This is partly due to the complexity of having some docs from 'release' and 
> some from 'main' branches.
> 
> We had a way to manage this when all docs were on the petsc website - but it's 
> not clear how to do this properly with ReadTheDocs
> 
> Satish
> 
> On Fri, 5 Mar 2021, Barry Smith wrote:
> 
>> 
>>  What is the plan to transition to the new documentation and webpages?
>> 
>>  I go to https://www.mcs.anl.gov/petsc/index.html 
>>  and mostly see the old stuff etc. 
>> Our next release is coming up soon and it would be nice to have transitioned 
>> out of the old material and to the new material by/at the new release.
>> 
>>  What do we need to do to make this happen?
>> 
>>   Thanks
>> 
>>   Barry
>> 
>> 
> 



Re: [petsc-dev] Commit squashing in MR

2021-03-04 Thread Patrick Sanan
I have also been enjoying using lazygit (thanks, Lisandro, for the tip!).  It's 
a similar sort of thing but runs in the terminal.
 I find it very useful for those things where the command line git tool falls 
down (staging parts of files, browsing large sets of changes), and I like that 
I don't have to bother with X windows to use something like gitk on my remote 
machine.

https://github.com/jesseduffield/lazygit

The only wrinkle I ran into using this is that it seems to assume you have a 
somewhat-recent "git" executable for some of the fancier
features (like merging or rearranging commits without using git rebase -i).

> On 03.03.2021 at 21:02, Jacob Faibussowitsch wrote:
> 
>> 'gitk' is easier to read [for me] than 'git log --graph'
> 
> Where was this my entire life… best kept git secret!
> 
> Best regards,
> 
> Jacob Faibussowitsch
> (Jacob Fai - booss - oh - vitch)
> Cell: (312) 694-3391
> 
>> On Mar 3, 2021, at 13:55, Satish Balay wrote:
>> 
>> 'gitk' is easier to read [for me] than 'git log --graph'
>> 
>> Satish
>> 
>> On Wed, 3 Mar 2021, Jacob Faibussowitsch wrote:
>> 
 git: 'graph' is not a git command. See 'git --help'.
>>> 
>>> I have it as an alias:
>>> 
>>> graph = !git log --graph --pretty=format:'%Cred%h%Creset 
>>> -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset' 
>>> --abbrev-commit --date=relative
>>> 
>>> Best regards,
>>> 
>>> Jacob Faibussowitsch
>>> (Jacob Fai - booss - oh - vitch)
>>> Cell: (312) 694-3391
>>> 
 On Mar 3, 2021, at 13:50, Mark Adams wrote:
 
 
 
 On Tue, Mar 2, 2021 at 10:02 PM Junchao Zhang wrote:
 I am a naive git user, so I use interactive git rebase.  Suppose I am on 
 the branch I want to modify, 
 
 1) Use git graph to locate an upstream commit to be used as the base
 $ git graph
 
 Humm 
 
 14:49 adams/cusparse-lu-landau= /gpfs/alpine/csc314/scratch/adams/petsc$ 
 git --version
 git version 2.20.1
 14:49 adams/cusparse-lu-landau= /gpfs/alpine/csc314/scratch/adams/petsc$ 
 git graph
 git: 'graph' is not a git command. See 'git --help'.
 
 The most similar commands are
 branch
 grep
 
>>> 
>>> 
>> 
> 
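A footnote on the thread above: `git graph` is not a built-in git command but the alias Jacob posted. A sketch of installing and trying it (the format string is copied verbatim from Jacob's message; the throwaway-repository setup is invented for illustration):

```shell
# Install the alias globally (format string from Jacob's message above);
# after this, "git graph" works in any repository.
git config --global alias.graph "log --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset' --abbrev-commit --date=relative"

# Try it in a throwaway repository.
demo=$(mktemp -d) && cd "$demo"
git init -q .
git config user.name "demo" && git config user.email "demo@example.com"
git commit -q --allow-empty -m "first commit"
git graph    # prints one decorated line per commit, with graph edges
```

With the alias in place, the "git: 'graph' is not a git command" error above goes away; since it lives in ~/.gitconfig, it carries over to every repository.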



Re: [petsc-dev] Commit squashing in MR

2021-03-02 Thread Patrick Sanan
The whole section on git in the dev manual needs some attention. (It was moved 
there in the consolidation of docs we had scattered in various places, but 
hasn't been expertly updated yet). Ideal, I think, would be to find some good, 
external instructions and link to them, under the idea that we should only 
maintain things in our own docs that aren't adequately documented somewhere 
else. This might not be possible (since we had to create these instructions in 
the first place).

There is a section on squashing but it's currently a bit buried, and the advice 
in this thread is probably more useful/current
https://docs.petsc.org/en/main/developers/integration/#squashing-excessive-commits
 


If anyone wants to go in there and quickly update those docs, remember that you 
can do so all from web interfaces! This workflow still has some wrinkles, but 
for small changes I still think it's appealing:

- go to the docs page you want to edit on docs.petsc.org
- select the version you want (usually "main") in the black ReadTheDocs box in 
the lower right
- click "edit" in "on GitLab" and make your MR (name the branch with "docs-" to 
maybe get it to auto-build on ReadTheDocs, label with docs and docs-only)
- if you get feedback on your MR and need to update, or notice a typo, I 
*think* this will work:
   - click on the last commit of your new branch
   - find the offending file
   - click on "edit at @deadbeef123"
- change the branch *back* to your branch in the pulldown
- click "edit"
- back in your MR, edit to "squash commits"

You can get a partial preview with the usual "preview" button, though not 
everything is interpreted correctly (but for things like links, it works fine).

If you want a full preview, you can

1. Build the Sphinx docs locally from your branch, either with
- "make sphinx-docs-all LOC=$PETSC_DIR"  (you may need to add 
PYTHON=python3, since this relies on Python 3.3+ for venv) 
- install the required Python packages yourself (e.g. pip install -r 
src/docs/sphinx_docs/requirements.txt), go to src/docs/sphinx_docs, run "make 
html", and look in _build/html

2. Build the Sphinx docs for your branch as a version on ReadTheDocs. There is 
currently an automation rule there that if your branch name has "docs-" in it, 
it should build (though I must admit I'm still not completely sure I understand 
exactly when RTD updates its information from GitLab). Or, if you have access, 
you can activate a new version yourself.



> On 03.03.2021 at 05:32, Jed Brown wrote:
> 
> Satish Balay via petsc-dev writes:
> 
>> On Wed, 3 Mar 2021, Blaise A Bourdin wrote:
>> 
>>> Hi,
>>> 
>>> This is not technically a petsc question. 
>>> It would be great to have a short section in the PETSc integration workflow 
>>> document explaining how to squash commits in a MR for git-impaired 
>>> developers like me.
>>> 
>>> Anybody wants to pitch in, or explain me how to do this?
>> 
>> To squash commits - I use the 'squash' action in 'git rebase -i HASH' and 
>> figure out the HASH to use from 'gitk main..branch'
>> 
>> [as git rebase requires the commit prior to the first commit of interest]
>> 
>> git provides many ways of modifying the branch (and the rebase topic is very 
>> generic) so I think its best to rely on proper git docs/tutorials
>> [and its not really specific to petsc workflow]
> 
> You can do it in one line, without changing the base:
> 
>  git rebase -i $(git merge-base main HEAD)
> 
> 
> An alternative is
> 
>  git rebase -i main
> 
> which gives you interactive rebase to replay on top of current 'main'. This 
> does two things at once and changing the base for your branch is not always 
> desirable.



Re: [petsc-dev] headsup: switch git default branch from 'master' to 'main'

2021-02-26 Thread Patrick Sanan
The answers to these were probably stated already, but the reminder might be 
useful to others, as well.

What will happen to "master" after today? Will it be deleted immediately or at 
some planned time? If not immediately deleted, will it be updated to match main?

> On 23.02.2021 at 18:19, Satish Balay via petsc-dev wrote:
> 
> All,
> 
> This is a heads-up, we are to switch the default branch in petsc git
> repo from 'master' to 'main'
> 
> [Will plan to do the switch on friday the 26th]
> 
> We've previously switched 'maint' branch to 'release' before 3.14
> release - and this change (to 'main') is the next step in this direction.
> 
> Satish
> 
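For anyone with an existing clone, a typical recipe to follow such a rename locally (hypothetical commands, not an official PETSc procedure; assumes the remote is named origin) is:

```shell
# After the remote's default branch has been renamed from "master" to "main":
git fetch origin                  # learn about the new origin/main
git branch -m master main         # rename the local branch
git branch -u origin/main main    # make it track the new upstream
git remote set-head origin -a     # repoint origin/HEAD at the new default
```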



Re: [petsc-dev] Understanding Vecscatter with Kokkos Vecs

2021-02-19 Thread Patrick Sanan
We ended up doing it just as you say - copy the data to the host, use it to 
build the IS, and build the scatter. Would be fun to optimize further, maybe, 
but as you say that might be premature since there's ongoing work. Happy to get 
to play with it a bit, though!

Aside: it's been on the list of good things to do, docs-wise, to be able to 
label parts of the API as more or less stable, so I'm hoping we'll get to that 
(though I think it makes sense to wait until we've finished some of the current 
migrations tasks).

> On 19.02.2021 at 16:18, Jed Brown wrote:
> 
> ISCUDA isn't even right (perhaps ISGENERALCUDA, ISBLOCKCUDA). I agree that 
> this isn't a priority, but I could see it being needed in the next few years 
> to avoid bottlenecks in adaptive mesh refinement or other adaptive 
> algorithms. It's not a small amount of work, but I think all the index 
> coordination can be done efficiently on a GPU.
> 
> Junchao Zhang  writes:
> 
>> Even if ISCUDA is simple to add, the PetscSFSetUp algorithm and many functions
>> involved are done on the host (and are not simple to parallelize on the GPU).
>> The indices passed to VecScatter are analyzed and re-grouped. Even though they
>> are copied to the device eventually, they are likely not in their original form.
>> So, copying the indices from the device to the host and building a VecScatter
>> there seems the easiest approach.
>> 
>> The Kokkos-related functions are experimental. We need to decide whether
>> they are good or not.
>> 
>> --Junchao Zhang
>> 
>> 
>> On Fri, Feb 19, 2021 at 4:32 AM Patrick Sanan 
>> wrote:
>> 
>>> Thanks! That helps a lot.
>>> 
>>> I assume "no," but is ISCUDA simple to add?
>>> 
>>> More on what I'm trying to do, in case I'm missing an obvious approach:
>>> 
>>> I'm working on a demo code that uses an external library, based on Kokkos,
>>> as a solver - I create a Vec of type KOKKOS and populate it with the
>>> solution data from the library, by getting access to the raw Kokkos view
>>> with VecKokkosGetDeviceView() * .
>>> 
>>> I then want to reorder that solution data into PETSc-native ordering (for
>>> a velocity-pressure DMStag), so I create a pair of ISs and a VecScatter to
>>> do that.
>>> 
>>> The issue is that to create this scatter, I need to use information
>>> (essentially, an element-to-index map) from the external library's
>>> mesh-management object, which lives on the device. This doesn't work (when
>>> host != device), because of course the ISs live on the host and to create
>>> them I need to provide host arrays of indices.
>>> 
>>> Am I stuck, for now, with sending the index information from
>>> the device to the host, using it to create the IS, and then having
>>> essentially the same information go back to the device when I use the
>>> scatter?
>>> 
>>> * As an aside, it looks like some of these Kokkos-related functions and
>>> types are missing man pages - if you have time to add them, even as stubs,
>>> that'd be great (if not let me know and I'll just try to formally do it, so
>>> that at least the existence of the functions in the API is reflected on the
>>> website).
>>> 
>>> On 18.02.2021 at 23:17, Junchao Zhang wrote:
>>> 
>>> 
>>> On Thu, Feb 18, 2021 at 4:04 PM Fande Kong  wrote:
>>> 
>>>> 
>>>> 
>>>> On Thu, Feb 18, 2021 at 1:55 PM Junchao Zhang 
>>>> wrote:
>>>> 
>>>>> VecScatter (i.e., SF, the two are the same thing) setup (building
>>>>> various index lists, rank lists) is done on the CPU.  is1, is2 must be 
>>>>> host
>>>>> data.
>>>>> 
>>>> 
>>>> Just out of curiosity, is1 and is2 can not be created on a GPU device in
>>>> the first place? That being said, it is technically impossible? Or we just
>>>> did not implement them yet?
>>>> 
>>> Simply because we do not have an ISCUDA class.
>>> 
>>> 
>>>> 
>>>> Fande,
>>>> 
>>>> 
>>>>> When the SF is used to communicate device data, indices are copied to
>>>>> the device..
>>>>> 
>>>>> --Junchao Zhang
>>>>> 
>>>>> 
>>>>> On Thu, Feb 18, 2021 at 11:50 AM Patrick Sanan 
>>>>> wrote:
>>>>> 
>>>>>> I'm trying to understand how VecScatters work with GPU-native Kokkos
>>>>>> Vecs.
>>>>>> 
>>>>>> Specifically, I'm interested in what will happen in code like in
>>>>>> src/vec/vec/tests/ex22.c,
>>>>>> 
>>>>>> ierr = VecScatterCreate(x,is1,y,is2,);CHKERRQ(ierr);
>>>>>> 
>>>>>> (from
>>>>>> https://gitlab.com/petsc/petsc/-/blob/master/src/vec/vec/tests/ex22.c#L44
>>>>>> )
>>>>>> 
>>>>>> Here, x and y can be set to type KOKKOS using -vec_type kokkos at the
>>>>>> command line. But is1 and is2 are (I think), always
>>>>>> CPU/host data. Assuming that the scatter itself can happen on the GPU,
>>>>>> the indices must make it to the device somehow - are they copied there 
>>>>>> when
>>>>>> the scatter is created? Is there a way to create the scatter using 
>>>>>> indices
>>>>>> already on the GPU (Maybe using SF more directly)?
>>>>>> 
>>>>>> 
>>> 



Re: [petsc-dev] Understanding Vecscatter with Kokkos Vecs

2021-02-19 Thread Patrick Sanan
True that it's not a huge efficiency concern, as this only affects the setup 
stage. This is more wondering if there's a simpler way to do the setup (sounds 
like there isn't at the moment).

> On 19.02.2021 at 11:41, Stefano Zampini wrote:
> 
> I don't understand the issue. I assume your vecscatter is not a throw-away 
> object but one you will reuse many times. Once the setup is done, the indices 
> used internally by the implementation should be on the GPU, or not?
> 
> On Fri, 19 Feb 2021 at 13:33, Patrick Sanan wrote:
> Thanks! That helps a lot. 
> 
> I assume "no," but is ISCUDA simple to add?
> 
> More on what I'm trying to do, in case I'm missing an obvious approach:
> 
> I'm working on a demo code that uses an external library, based on Kokkos, as 
> a solver - I create a Vec of type KOKKOS and populate it with the solution 
> data from the library, by getting access to the raw Kokkos view with 
> VecKokkosGetDeviceView() * .
> 
> I then want to reorder that solution data into PETSc-native ordering (for a 
> velocity-pressure DMStag), so I create a pair of ISs and a VecScatter to do 
> that.
> 
> The issue is that to create this scatter, I need to use information 
> (essentially, an element-to-index map) from the external library's 
> mesh-management object, which lives on the device. This doesn't work (when 
> host != device), because of course the ISs live on the host and to create 
> them I need to provide host arrays of indices.
> 
> Am I stuck, for now, with sending the index information from the 
> device to the host, using it to create the IS, and then having essentially 
> the same information go back to the device when I use the scatter?
> 
> * As an aside, it looks like some of these Kokkos-related functions and types 
> are missing man pages - if you have time to add them, even as stubs, that'd 
> be great (if not let me know and I'll just try to formally do it, so that at 
> least the existence of the functions in the API is reflected on the website).
> 
>> On 18.02.2021 at 23:17, Junchao Zhang wrote:
>> 
>> 
>> On Thu, Feb 18, 2021 at 4:04 PM Fande Kong wrote:
>> 
>> 
>> On Thu, Feb 18, 2021 at 1:55 PM Junchao Zhang wrote:
>> VecScatter (i.e., SF, the two are the same thing) setup (building various 
>> index lists, rank lists) is done on the CPU.  is1, is2 must be host data. 
>> 
>> Just out of curiosity, is1 and is2 can not be created on a GPU device in the 
>> first place? That being said, it is technically impossible? Or we just did 
>> not implement them yet?
>> Simply because we do not have an ISCUDA class.
>>  
>> 
>> Fande,
>>  
>> When the SF is used to communicate device data, indices are copied to the 
>> device..
>> 
>> --Junchao Zhang
>> 
>> 
>> On Thu, Feb 18, 2021 at 11:50 AM Patrick Sanan wrote:
>> I'm trying to understand how VecScatters work with GPU-native Kokkos Vecs. 
>> 
>> Specifically, I'm interested in what will happen in code like in 
>> src/vec/vec/tests/ex22.c, 
>> 
>>  ierr = VecScatterCreate(x,is1,y,is2,);CHKERRQ(ierr);
>> 
>> (from 
>> https://gitlab.com/petsc/petsc/-/blob/master/src/vec/vec/tests/ex22.c#L44)
>> 
>> Here, x and y can be set to type KOKKOS using -vec_type kokkos at the 
>> command line. But is1 and is2 are (I think), always
>> CPU/host data. Assuming that the scatter itself can happen on the GPU, the 
>> indices must make it to the device somehow - are they copied there when the 
>> scatter is created? Is there a way to create the scatter using indices 
>> already on the GPU (Maybe using SF more directly)?
>> 
> 



Re: [petsc-dev] Understanding Vecscatter with Kokkos Vecs

2021-02-19 Thread Patrick Sanan
Thanks! That helps a lot. 

I assume "no," but is ISCUDA simple to add?

More on what I'm trying to do, in case I'm missing an obvious approach:

I'm working on a demo code that uses an external library, based on Kokkos, as a 
solver - I create a Vec of type KOKKOS and populate it with the solution data 
from the library, by getting access to the raw Kokkos view with 
VecKokkosGetDeviceView() * .

I then want to reorder that solution data into PETSc-native ordering (for a 
velocity-pressure DMStag), so I create a pair of ISs and a VecScatter to do 
that.

The issue is that to create this scatter, I need to use information 
(essentially, an element-to-index map) from the external library's 
mesh-management object, which lives on the device. This doesn't work (when host 
!= device), because of course the ISs live on the host and to create them I 
need to provide host arrays of indices.

Am I stuck, for now, with sending the index information from the 
device to the host, using it to create the IS, and then having essentially the 
same information go back to the device when I use the scatter?

* As an aside, it looks like some of these Kokkos-related functions and types 
are missing man pages - if you have time to add them, even as stubs, that'd be 
great (if not let me know and I'll just try to formally do it, so that at least 
the existence of the functions in the API is reflected on the website).

> On 18.02.2021 at 23:17, Junchao Zhang wrote:
> 
> 
> On Thu, Feb 18, 2021 at 4:04 PM Fande Kong wrote:
> 
> 
> On Thu, Feb 18, 2021 at 1:55 PM Junchao Zhang wrote:
> VecScatter (i.e., SF, the two are the same thing) setup (building various 
> index lists, rank lists) is done on the CPU.  is1, is2 must be host data. 
> 
> Just out of curiosity, is1 and is2 can not be created on a GPU device in the 
> first place? That being said, it is technically impossible? Or we just did 
> not implement them yet?
> Simply because we do not have an ISCUDA class.
>  
> 
> Fande,
>  
> When the SF is used to communicate device data, indices are copied to the 
> device..
> 
> --Junchao Zhang
> 
> 
> On Thu, Feb 18, 2021 at 11:50 AM Patrick Sanan wrote:
> I'm trying to understand how VecScatters work with GPU-native Kokkos Vecs. 
> 
> Specifically, I'm interested in what will happen in code like in 
> src/vec/vec/tests/ex22.c, 
> 
>   ierr = VecScatterCreate(x,is1,y,is2,);CHKERRQ(ierr);
> 
> (from 
> https://gitlab.com/petsc/petsc/-/blob/master/src/vec/vec/tests/ex22.c#L44)
> 
> Here, x and y can be set to type KOKKOS using -vec_type kokkos at the command 
> line. But is1 and is2 are (I think), always
> CPU/host data. Assuming that the scatter itself can happen on the GPU, the 
> indices must make it to the device somehow - are they copied there when the 
> scatter is created? Is there a way to create the scatter using indices 
> already on the GPU (Maybe using SF more directly)?
> 



[petsc-dev] Understanding Vecscatter with Kokkos Vecs

2021-02-18 Thread Patrick Sanan
I'm trying to understand how VecScatters work with GPU-native Kokkos Vecs. 

Specifically, I'm interested in what will happen in code like in 
src/vec/vec/tests/ex22.c, 

ierr = VecScatterCreate(x,is1,y,is2,);CHKERRQ(ierr);

(from https://gitlab.com/petsc/petsc/-/blob/master/src/vec/vec/tests/ex22.c#L44 
)

Here, x and y can be set to type KOKKOS using -vec_type kokkos at the command 
line. But is1 and is2 are (I think), always
CPU/host data. Assuming that the scatter itself can happen on the GPU, the 
indices must make it to the device somehow - are they copied there when the 
scatter is created? Is there a way to create the scatter using indices already 
on the GPU (Maybe using SF more directly)?



Re: [petsc-dev] error with flags PETSc uses for determining AVX

2021-02-14 Thread Patrick Sanan


> On 14.02.2021 at 07:22, Barry Smith wrote:
> 
> 
> 
>> On Feb 13, 2021, at 11:58 PM, Jed Brown  wrote:
>> 
>> I usually configure --with-debugging=0 COPTFLAGS='-O2 -march=native' or 
>> similar. There's a tension here between optimizing aggressively for the 
>> current machine and making binaries that work on other machines. Most 
>> configure systems default to making somewhat portable binaries, so that's a 
>> principal of least surprise. (Though you're no novice and seem to have been 
>> surprised anyway.)
>> 
>> I'd kinda prefer if we recommended making portable binaries that run-time 
>> detected when to use newer instructions where it matters.
> 
>   How do we do this? What can we put in configure to do this.
> 
>   Yes, I never paid attention to the AVX nonsense over the years and never 
> realized that Intel and Gnu (and hence PETSc)  both compile by default for 
> machines I used in my twenties.
> 
>   Expecting PETSc users to automatically add -march= is not realistic. I 
> will try to rig something up in configure where, if the user does not provide 
> -march, something reasonable is selected. 
A softer (yet trivial to implement) option might also be to just alert the user 
that these flags exist in the usual message about using default optimization 
flags. Something like this would encourage users to do what Jed is doing:

  * WARNING: Using default optimization C flags -g -O3
    You might consider manually setting optimal optimization flags for your
    system with COPTFLAGS="optimization flags"; see
    config/examples/arch-*-opt.py for examples. In particular, you may want
    to supply specific flags (e.g. -march=native) to take advantage of
    higher-performance instructions.

None of the examples in config/examples actually use -march=native, and this is 
a very common thing to do which, as you point out, isn't obvious until you know 
you have to do it, so it seems to be worth the screen space.





> 
>  Barry
> 
> 
>> 
>> Barry Smith  writes:
>> 
>>> Shouldn't configure be setting something appropriate for this 
>>> automatically? This is nuts: it means that when users do a ./configure make, 
>>> unless they pass weird arguments to the compiler that they sure as heck 
>>> don't know about, they won't get any of the glory that they expect and that 
>>> has been in almost all Intel systems forever.
>>> 
>>> Barry
>>> 
>>> I run ./configure --with-debugging=0 and I get none of the stuff added by 
>>> Intel for 15+ years?
>>> 
>>> 
 On Feb 13, 2021, at 11:26 PM, Jed Brown  wrote:
 
 Use -march=native or similar. The default target is basic x86_64, which 
 has only SSE2.
 
 Barry Smith  writes:
 
> PETSc source has code like defined(__AVX2__) in the source but it does 
> not seem to be able to find any of these macros (icc or gcc) on the 
> petsc-02 system
> 
> Are these macros supposed to be defined? How does one get them to be 
> defined? Why are they not defined? What am I doing wrong?
> 
> Keep reading
> 
> $ lscpu 
> Architecture:x86_64
> CPU op-mode(s):  32-bit, 64-bit
> Byte Order:  Little Endian
> CPU(s):  64
> On-line CPU(s) list: 0-63
> Thread(s) per core:  2
> Core(s) per socket:  16
> Socket(s):   2
> NUMA node(s):2
> Vendor ID:   GenuineIntel
> CPU family:  6
> Model:   85
> Model name:  Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz
> Stepping:7
> CPU MHz: 1000.603
> CPU max MHz: 2301.
> CPU min MHz: 1000.
> BogoMIPS:4600.00
> Virtualization:  VT-x
> L1d cache:   32K
> L1i cache:   32K
> L2 cache:1024K
> L3 cache:22528K
> NUMA node0 CPU(s):   0-15,32-47
> NUMA node1 CPU(s):   16-31,48-63
> Flags:   fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge 
> mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe 
> syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts 
> rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 
> monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca 
> sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c 
> rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 
> invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced 
> tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep 
> bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap 
> clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec 
> xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm 
> ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d 
> arch_capabilities
> 
> Test program 
> 
> #if defined(__FMA__)
> #error FMA

Re: [petsc-dev] behavior of PetscDefined(PETSC_HAVE_CUDA)

2020-10-16 Thread Patrick Sanan
The man page for this seems to need fixing:

"Either way evaluates true if PETSC_USE_DEBUG is defined (merely defined or 
defined to 1) or undefined. This macro should not be used if its argument may 
be defined to a non-empty value other than 1."

Is a 0 value allowed? If so, does it also evaluate to false?


The source check seems easy enough - reject anything that matches the following?

PetscDefined(\W*PETSC_
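The proposed source check could be sketched as a tiny linter (illustrative only, not the actual CI check): flag any call whose argument redundantly starts with PETSC_, since the macro expects the suffix, e.g. PetscDefined(HAVE_CUDA) rather than PetscDefined(PETSC_HAVE_CUDA).

```python
import re

# Mirror of the regex sketched above: a PetscDefined( call immediately
# followed (modulo non-word characters) by a PETSC_ prefix is suspect.
BAD_USE = re.compile(r"PetscDefined\(\W*PETSC_")

def flag_bad_uses(source):
    # Return 1-based line numbers of suspect uses.
    return [lineno for lineno, line in enumerate(source.splitlines(), start=1)
            if BAD_USE.search(line)]

src = ("if (PetscDefined(HAVE_CUDA)) { }\n"
       "if (PetscDefined(PETSC_HAVE_CUDA)) { }\n")
print(flag_bad_uses(src))  # -> [2]
```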

> Am 16.10.2020 um 14:59 schrieb Mark Adams :
> 
> 
> 
> On Fri, Oct 16, 2020 at 1:05 AM Jed Brown  wrote:
> Barry Smith  writes:
> 
> >   Could it error if the input starts with PETS_ to prevent inadvertent 
> > mistakes hanging around for years?
> 
> Source checks could warn, but cpp macros can't do substring matching.
> 
> You would want to check this before CI, so live with it. 



Re: [petsc-dev] pull request tutorial October 15 at 2pm central

2020-10-15 Thread Patrick Sanan
This page is the home for documentation on these processes (and no longer the 
GitLab wiki):
https://docs.petsc.org/en/latest/developers/integration/

> Am 14.10.2020 um 23:52 schrieb Munson, Todd via petsc-dev 
> :
> 
>  
> Hi all,
>  
> I forgot to send a reminder earlier.  Our next tutorial will be October 15 at 
> 2pm central.  Barry will be leading a tutorial on pull requests from 
> preparing the request to what happens during the review and after the pull 
> request is accepted.  Please email Barry directly if you have questions that 
> you would like him to address in the tutorial.  The information for the call 
> is below.
>  
> Thanks, Todd.
>  
>  
> To join the meeting on a computer or mobile phone: 
> https://bluejeans.com/753281003?src=calendarLink 
> 
> 
> Phone Dial-in
> +1.312.216.0325 (US (Chicago))
> +1.408.740.7256 (US (San Jose))
> +1.866.226.4650 (US Toll Free)
> Global Numbers: https://www.bluejeans.com/premium-numbers 
> 
> 
> Meeting ID: 753 281 003
> 
> Room System
> 199.48.152.152 or bjn.vc
> 
> Meeting ID: 753 281 003
> 
> Want to test your video connection?
> https://bluejeans.com/111 


Re: [petsc-dev] Makefile.user and CUDA

2020-10-14 Thread Patrick Sanan
Here's a hack that seems to do what I want, but I don't think it's 
library-quality, as it extracts information from the "cuda" package in what 
seems like a brittle way:

https://gitlab.com/petsc/petsc/-/merge_requests/3345
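The brittleness comes from having to reconstruct information that BuildSystem already knows. For illustration only, the kind of addition to petsc.pc under discussion might look like this (variable names and paths below are hypothetical, not the actual contents of the MR):

```
# Hypothetical extra variables appended to petsc.pc -- names and paths are
# illustrative only, mirroring what petscvariables provides as CUDAC,
# CUDAC_FLAGS, CUDA_INCLUDE, and CUDA_LIB.
cudac=/opt/cuda/bin/nvcc
cudac_flags=-arch=sm_70
cuda_includedir=/opt/cuda/include
cuda_libdir=/opt/cuda/lib64
```

A makefile could then recover these with, e.g., `pkg-config --variable=cudac /path/to/petsc.pc`, since pkg-config accepts a path to a .pc file directly.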

> Am 13.10.2020 um 03:00 schrieb Jed Brown :
> 
> Matthew Knepley writes:
> 
>> On Mon, Oct 12, 2020 at 3:03 PM Patrick Sanan 
>> wrote:
>> 
>>> 
>>> 
>>> Am 12.10.2020 um 20:11 schrieb Matthew Knepley :
>>> 
>>> On Mon, Oct 12, 2020 at 3:47 AM Patrick Sanan 
>>> wrote:
>>> 
>>>> I have a toy application code built on PETSc which needs to compile and
>>>> link a .cu file.
>>>> 
>>>> I'd love to be able to configure PETSc (with CUDA), and then use a
>>>> modified version of share/petsc/Makefile.user to compile and link my code,
>>>> using a consistent set of compilers, libraries, and flags.  Makefile.user
>>>> uses petsc.pc (via pkg-config) and implicit GNU make rules to do almost
>>>> everything for you for C, C++, and Fortran.
>>>> 
>>>> However, I don't think it currently supports CUDA, and I'm not familiar
>>>> enough with BuildSystem or pkg-config to quickly add support myself, so I
>>>> resort to the "old" way, including things like this in my Makefile:
>>>> 
>>> 
>>> Okay, here is where the pkgconf file gets generated, and in fact where the
>>> language sections are
>>> 
>>> 
>>> https://gitlab.com/petsc/petsc/-/blob/master/config/PETSc/Configure.py#L161
>>> 
>>> I think you can just put a "CUDA" section in. I do not know what names to
>>> use, so I have not done it.
>>> 
>>> 
>>> That's where I got stuck as well :D
>>> 
>>> I can follow the pattern to get the compiler (CUDAC) and some of the
>>> flags, but that doesn't seem to have all the information that CUDAC,
>>> CUDAC_FLAGS, CUDA_INCLUDE, and CUDA_LIB provide in petscvariables.
>>> 
>> 
>> I can dig around and find all these for you.
>> 
>> 
>>> Naively I would assume I'd need to dig around to figure out how those
>>> variables are populated, and get the same info into petsc.pc
>>> 
>>> But, I worry that I'm misunderstanding fundamentals about pkg-config and
>>> how it is supposed to work. Am I free to put whatever fields I want in
>>> there, or is there some authority on what's "standard"?
>> 
>> I do not understand anything about pkg-config (and think it is a
>> fundamentally misguided mechanism). Jed, how should CUDA work with this?
> 
> I have no experience, but here are some examples.  I think the caller is 
> responsible for using nvcc if you have *.cu source.
> 
> $ cat /usr/lib/pkgconfig/cublas.pc 
> cudaroot=/opt/cuda
> libdir=${cudaroot}/targets/x86_64-linux/lib
> includedir=${cudaroot}/targets/x86_64-linux/include
> 
> Name: cublas
> Description: CUDA BLAS Library
> Version: 11.0
> Libs: -L${libdir} -lcublas
> Cflags: -I${includedir}
> 
> 
> $ cat /usr/lib/pkgconfig/nvrtc.pc 
> cudaroot=/opt/cuda
> libdir=${cudaroot}/targets/x86_64-linux/lib
> includedir=${cudaroot}/targets/x86_64-linux/include
> 
> Name: nvrtc
> Description: A runtime compilation library for CUDA C++
> Version: 11.0
> Libs: -L${libdir} -lnvrtc
> Cflags: -I${includedir}



Re: [petsc-dev] [petsc-users] About MAT_NEW_NONZERO_LOCATION[]

2020-10-13 Thread Patrick Sanan
So we can just take the advice out of the manual about setting this, since it's 
the default behavior?
https://gitlab.com/petsc/petsc/-/merge_requests/3344


> Am 13.10.2020 um 16:41 schrieb Barry Smith :
> 
> 
>   You only need to provide one of the options. 
> 
>    The docs are slightly misleading. The flags only tell the matrix what to 
> do with new nonzero locations (e.g., preventing new ones). The Mat actually 
> tracks whether new nonzero locations are entered, independent of the flags. 
> So, for example, even if you did not supply any flags AND your code did not 
> insert new locations, then the structure would be reused.
> 
>Barry
> 
>> On Oct 13, 2020, at 7:47 AM, Thibaut Appel wrote:
>> 
>> Hi there, just a quick question:
>> 
>> It seems MAT_NEW_NONZERO_LOCATION_ERR set to PETSC_TRUE has kind of the same 
>> purpose as MAT_NEW_NONZERO_LOCATIONS set to PETSC_FALSE, the difference 
>> being if an additional entry is there, the former produces an error whereas 
>> in the latter it is simply ignored.
>> 
>> However the manual states:
>> 
>> 'If one wishes to repeatedly assemble matrices that retain the same nonzero 
>> pattern (such as within a nonlinear or time-dependent problem), the option 
>> MatSetOption(MatA,MAT_NEW_NONZERO_LOCATIONS,PETSC_FALSE); should be 
>> specified after the first matrix has been fully assembled. This option 
>> ensures that certain data structures and communication information will be 
>> reused (instead of regenerated) during successive steps, thereby increasing 
>> efficiency'
>> 
>> If I only declare:
>> 
>> CALL MatSetOption(MatA,MAT_NEW_NONZERO_LOCATION_ERR,PETSC_TRUE,ierr)
>> 
>> Would the data structures still be reused in later matrix assemblies?
>> 
>> Or does it rather make sense to use conjointly:
>> 
>> CALL MatSetOption(MatA,MAT_NEW_NONZERO_LOCATION_ERR,PETSC_TRUE,ierr)
>> CALL MatSetOption(MatA,MAT_NEW_NONZERO_LOCATIONS,PETSC_FALSE,ierr)
>> 
>> Thank you,
>> 
>> 
>> 
>> Thibaut
>> 
> 



Re: [petsc-dev] Makefile.user and CUDA

2020-10-12 Thread Patrick Sanan


> Am 12.10.2020 um 20:11 schrieb Matthew Knepley :
> 
> On Mon, Oct 12, 2020 at 3:47 AM Patrick Sanan wrote:
> I have a toy application code built on PETSc which needs to compile and link 
> a .cu file.
> 
> I'd love to be able to configure PETSc (with CUDA), and then use a modified 
> version of share/petsc/Makefile.user to compile and link my code, using a 
> consistent set of compilers, libraries, and flags.  Makefile.user uses 
> petsc.pc (via pkg-config) and implicit GNU make rules to do almost everything 
> for you for C, C++, and Fortran.
> 
> However, I don't think it currently supports CUDA, and I'm not familiar 
> enough with BuildSystem or pkg-config to quickly add support myself, so I 
> resort to the "old" way, including things like this in my Makefile:
> 
> Okay, here is where the pkgconf file gets generated, and in fact where the 
> language sections are
> 
>   https://gitlab.com/petsc/petsc/-/blob/master/config/PETSc/Configure.py#L161 
> <https://gitlab.com/petsc/petsc/-/blob/master/config/PETSc/Configure.py#L161>
> 
> I think you can just put a "CUDA" section in. I do not know what names to 
> use, so I have not done it.

That's where I got stuck as well :D 

I can follow the pattern to get the compiler (CUDAC) and some of the flags, but 
that doesn't seem to have all the information that CUDAC, CUDAC_FLAGS, 
CUDA_INCLUDE, and CUDA_LIB provide in petscvariables.

Naively I would assume I'd need to dig around to figure out how those variables 
are populated, and get the same info into petsc.pc

But, I worry that I'm misunderstanding fundamentals about pkg-config and how it 
is supposed to work. Am I free to put whatever fields I want in there, or is 
there some authority on what's "standard"?
>   Thanks,
> 
>  Matt
>  
> include ${PETSC_DIR}/${PETSC_ARCH}/lib/petsc/conf/petscvariables
> 
> %.o : %.cu
> 	$(CUDAC) -c $(CUDAC_FLAGS) $(CUDA_INCLUDE) -o $@ $<
> 
> app : ${OBJ}
> 	$(LINK.cc) -o $@ $^ $(LDLIBS) $(CUDA_LIB)
> 
> 
> Is it possible / easy / advisable to add CUDA support to petsc.pc ?
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/



[petsc-dev] Makefile.user and CUDA

2020-10-12 Thread Patrick Sanan
I have a toy application code built on PETSc which needs to compile and link a 
.cu file.

I'd love to be able to configure PETSc (with CUDA), and then use a modified 
version of share/petsc/Makefile.user to compile and link my code, using a 
consistent set of compilers, libraries, and flags.  Makefile.user uses petsc.pc 
(via pkg-config) and implicit GNU make rules to do almost everything for you 
for C, C++, and Fortran.

However, I don't think it currently supports CUDA, and I'm not familiar enough 
with BuildSystem or pkg-config to quickly add support myself, so I resort to 
the "old" way, including things like this in my Makefile:

include ${PETSC_DIR}/${PETSC_ARCH}/lib/petsc/conf/petscvariables

%.o : %.cu
	$(CUDAC) -c $(CUDAC_FLAGS) $(CUDA_INCLUDE) -o $@ $<

app : ${OBJ}
	$(LINK.cc) -o $@ $^ $(LDLIBS) $(CUDA_LIB)


Is it possible / easy / advisable to add CUDA support to petsc.pc ?
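Until petsc.pc carries this, a hybrid is possible: take the C flags from petsc.pc the way Makefile.user does, and fall back to petscvariables only for the CUDA pieces. A sketch, assuming pkg-config can locate petsc.pc at its usual install path under $PETSC_DIR/$PETSC_ARCH:

```make
# Sketch: pkg-config for C flags/libs (as Makefile.user does), petscvariables
# for the CUDA variables that petsc.pc does not yet provide.
petsc.pc := $(PETSC_DIR)/$(PETSC_ARCH)/lib/pkgconfig/petsc.pc
include $(PETSC_DIR)/$(PETSC_ARCH)/lib/petsc/conf/petscvariables

CFLAGS := $(shell pkg-config --cflags "$(petsc.pc)")
LDLIBS := $(shell pkg-config --libs "$(petsc.pc)")

%.o : %.cu
	$(CUDAC) -c $(CUDAC_FLAGS) $(CUDA_INCLUDE) -o $@ $<

app : $(OBJ)
	$(LINK.cc) -o $@ $^ $(LDLIBS) $(CUDA_LIB)
```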

Re: [petsc-dev] sphinx tutorial October 1 at 2pm central

2020-10-01 Thread Patrick Sanan
Hi all - thanks for listening!

Here are the slides (including pdf with lots of links): 
https://gitlab.com/psanan/petsc-sphinx-slides

> Am 01.10.2020 um 14:48 schrieb Munson, Todd via petsc-dev 
> :
> 
> Hi all,
>  
> A reminder that Patrick Sanan will be giving a tutorial on sphinx for the 
> petsc developers today, October 1 at 2pm central.  The information to join 
> the call is below.  This is timely as the petsc documentation is being moved 
> to use sphinx.  Hope to see those interested at the tutorial.
>  
> Thanks, Todd.
>  
> To join the meeting on a computer or mobile phone: 
> https://bluejeans.com/753281003?src=calendarLink 
> <https://bluejeans.com/753281003?src=calendarLink>
> 
> Phone Dial-in
> +1.312.216.0325 (US (Chicago))
> +1.408.740.7256 (US (San Jose))
> +1.866.226.4650 (US Toll Free)
> Global Numbers: https://www.bluejeans.com/premium-numbers 
> <https://www.bluejeans.com/premium-numbers>
> 
> Meeting ID: 753 281 003
> 
> Room System
> 199.48.152.152 or bjn.vc
> 
> Meeting ID: 753 281 003
> 
> Want to test your video connection?
> https://bluejeans.com/111 <https://bluejeans.com/111>


Re: [petsc-dev] reminder on managing your merge requests MR

2020-09-30 Thread Patrick Sanan

> Am 29.09.2020 um 08:16 schrieb Barry Smith :
> 
> 
> 
>> On Sep 29, 2020, at 12:43 AM, Patrick Sanan > <mailto:patrick.sa...@gmail.com>> wrote:
>> 
>> 
>> 
>>> Am 29.09.2020 um 02:12 schrieb Barry Smith >> <mailto:bsm...@petsc.dev>>:
>>> 
>>> 
>>>   This is a reminder for everyone submitting MR. 
>>> 
>>>   You are responsible to track the progress of the MR. Make sure you use the
>>> 
>>> label workflow:review when you think it is ready to be reviewed for 
>>> merge, add additional appropriate labels also
>>> assign some appropriate reviewers 
>>> 
>>> make sure it gets tested
>>> 
>>> when you resolve the reviewer concerns (called threads) make sure you 
>>> mark them as resolved
>>> 
>>> Once the tests are clean and the MR has been approved
>>>-  change the workflow label to workflow: ready for merge
>>>- assign Satish and no one else to the MR.
>>> 
>>>  By following this workflow fewer MRs will get "lost"
>>> 
>>>  Thanks
>>> 
>>>  Barry
>>> 
>>>  With the new documentation approach in place we'll provide more detailed 
>>> information on submitting MR and even videos :-) soon.
>>> 
>>> 
>> For now, the guidelines are defined on the wiki, e.g. 
>> https://gitlab.com/petsc/petsc/-/wikis/home#before-filing-a-merge-request 
>> <https://gitlab.com/petsc/petsc/-/wikis/home#before-filing-a-merge-request> 
>> Some of the other wiki pages are stale (discussing what to do with "next", 
>> etc.,)
>> 
>> The idea was to migrate this info to Sphinx as well. This would be less 
>> quick to edit, but more centralized and full-featured.
>> 
>> On the topic of docs edits, is it okay to label a docs-only edit (which 
>> could break only docs) as "ready to merge", and assign Satish, from the 
>> start? 
>> This would of course have to be used with extreme caution, but it's my hope 
>> that people would be able to notice and fix small typos and errors without 
>> losing the thread of what you're working on, and with low integration 
>> overhead (which is one thing the wiki does extremely well).
> 
> Yes, good idea. We could even have another person be the assigned person and 
> do the merges so Satish doesn't get even more work.  
> 
> I'd like to know an easy way to do the changes and MR from the web rather 
> than needing to drop into the command line for making the branches to MR. 
> Maybe that could be documented first? 
> 
>   Barry
> 
> 
Added a section here (which can be directly edited): 
https://gitlab.com/petsc/petsc/-/wikis/Home#docs-only-changes 
<https://gitlab.com/petsc/petsc/-/wikis/Home#docs-only-changes>
(this should still be moved to the Sphinx docs, so we can easily link to it 
from the other developers docs)

And opened a MR here, using the instructions detailed in the MR: 
https://gitlab.com/petsc/petsc/-/merge_requests/3282 
<https://gitlab.com/petsc/petsc/-/merge_requests/3282>




Re: [petsc-dev] reminder on managing your merge requests MR

2020-09-28 Thread Patrick Sanan


> Am 29.09.2020 um 02:12 schrieb Barry Smith :
> 
> 
>   This is a reminder for everyone submitting MR. 
> 
>   You are responsible to track the progress of the MR. Make sure you use the
> 
> label workflow:review when you think it is ready to be reviewed for 
> merge, add additional appropriate labels also
> assign some appropriate reviewers 
> 
> make sure it gets tested
> 
> when you resolve the reviewer concerns (called threads) make sure you 
> mark them as resolved
> 
> Once the tests are clean and the MR has been approved
>-  change the workflow label to workflow: ready for merge
>- assign Satish and no one else to the MR.
> 
>  By following this workflow fewer MRs will get "lost"
> 
>  Thanks
> 
>  Barry
> 
>  With the new documentation approach in place we'll provide more detailed 
> information on submitting MR and even videos :-) soon.
> 
> 
For now, the guidelines are defined on the wiki, e.g. 
https://gitlab.com/petsc/petsc/-/wikis/home#before-filing-a-merge-request 
 
Some of the other wiki pages are stale (discussing what to do with "next", 
etc.,)

The idea was to migrate this info to Sphinx as well. This would be less quick 
to edit, but more centralized and full-featured.

On the topic of docs edits, is it okay to label a docs-only edit (which could 
break only docs) as "ready to merge", and assign Satish, from the start? 
This would of course have to be used with extreme caution, but it's my hope 
that people would be able to notice and fix small typos and errors without 
losing the thread of what you're working on, and with low integration overhead 
(which is one thing the wiki does extremely well).

Re: [petsc-dev] petsc release plan for Sept/2020

2020-09-21 Thread Patrick Sanan
Hi Satish -

To follow up on the earlier discussion of what needs to be added the release 
checklist, as far as versioning:

The aim is to delete the old LaTeX users manual before the release, so things 
related to that obviously wouldn't apply anymore. 

The TAO manual remains for now.

The custom front page of the (new) PDF manual has the version number 
hard-coded, so this needs to be updated manually in two places in

 src/docs/sphinx_docs/manual/anl_tech_report/first.inc


If I have time I'll see if we can't make that front page use the version number 
from the Sphinx docs. There is logic which is supposed to use the correct 
version number if it's a release version,
otherwise something verbose from git-describe (one of the open MRs makes that 
change to src/docs/sphinx_docs/conf.py). 
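That version-selection logic could be sketched like so (the function and argument names here are hypothetical, not the actual conf.py code): use the hard-coded string for a release build, otherwise fall back to a verbose description from git.

```python
import subprocess

def doc_version(is_release, release_version):
    # Release builds get the fixed version string.
    if is_release:
        return release_version
    # Otherwise ask git for something verbose like "v3.13.4-1234-gabcdef".
    try:
        out = subprocess.check_output(["git", "describe", "--always"])
        return out.decode().strip()
    except (OSError, subprocess.CalledProcessError):
        # Not in a git checkout: fall back to a labeled development version.
        return release_version + "-dev"

print(doc_version(True, "3.14"))  # -> 3.14
```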



> Am 20.09.2020 um 18:43 schrieb Satish Balay via petsc-dev 
> :
> 
> A reminder.
> 
> thanks,
> Satish
> 
> On Wed, 2 Sep 2020, Satish Balay via petsc-dev wrote:
> 
>> All,
>> 
>> We are to make a petsc release by the end of September.
>> 
>> For this release [3.14], will work with the following dates:
>> 
>> - feature freeze: Sept 27 say 5PM EST
>> - release: Sept 29
>> 
>> Merges after freeze should contain only fixes that would normally be 
>> acceptable to maint workflow.
>> 
>> I've created a new milestone 'v3.14-release'. So if you are working on a MR 
>> with the goal of merging before release - its best to use this tag with the 
>> MR.
>> 
>> And it would be good to avoid merging large changes at the last minute. And 
>> not have merge requests stuck in need of reviews, testing and other 
>> necessary tasks.
>> 
>> And I would think the testing/CI resources would get stressed in this 
>> timeframe - so it would be good to use them judiciously if possible.
>> 
>> - if there are failures in stage-2 or 3 - and its no longer necessary to 
>> complete all the jobs - one can 'cancel' the pipeline.
>> - if a fix needs to be tested - one can first test with only the failed jobs 
>> (if this is known) - before doing a full test pipeline. i.e:
>>   - use the automatically started and paused 'merge-request' pipeline (or 
>> start new 'web' pipeline, and cancel it immediately)
>>   - now toggle only the jobs that need to be run
>>   - [on success of the selected jobs] if one wants to run the full pipeleine 
>> - click 'retry' - and the remaining canceled jobs should now get scheduled.
>> 
>> thanks,
>> Satish
>> 
> 



Re: [petsc-dev] Users manual update

2020-08-25 Thread Patrick Sanan


> Am 25.08.2020 um 17:24 schrieb Barry Smith :
> 
> 
> 
>> On Aug 25, 2020, at 9:26 AM, Patrick Sanan > <mailto:patrick.sa...@gmail.com>> wrote:
>> 
>> 
>> 
>>> Am 25.08.2020 um 15:59 schrieb Barry Smith >> <mailto:bsm...@petsc.dev>>:
>>> 
>>> 
>>>   Aren't we likely to keep using sowing to generate the manual pages and 
>>> then our own scripts to clean them up and add the links for source code 
>>> HTML etc? But use Sphinx for the users manual and all web pages we maintain 
>>> ourselves, FAQ etc? 
>>> 
>>>   So we will always have two "build" parts for documentation and need to 
>>> coordinate them?
>> Yes - what I'm hoping for is that we can come up with a process to make sure 
>> that the same event triggers updating of both systems. I'm not sure if this 
>> is already in the capabilities of the CI system on GitLab - can we, for 
>> example, trigger a "classic" docs build whenever master (or whichever 
>> branch) is updated?
> 
>   It is too slow now to put into the CI for every branch. Hopefully it will 
> eventually be fast enough. Currently it has many bash scripts etc. for each 
> file. It also builds everything from scratch; if we make it use dependencies 
> by hooking in a bit of Jed's gmakefile then it will be super fast, assuming 
> we can keep the previous build around (which is not normally done with the 
> CI but must be possible).
FWIW I think that's what ReadTheDocs does by default, which might make it 
feasible to build more things there.
> 
>   Barry
> 
> 
>>> 
>>>   Or do you have a plan to generate the manual pages with some other tool?  
>>> I don't think that is possible. BTW: I am updating the 
>>> generation/processing of the manual pages in 
>>> barry/2020-07-07/docs-no-makefiles  It should be much faster once several 
>>> rounds of refactorization are done and will not require all the SOURCEC 
>>> EXAMPLESC stuff in the makefiles anymore.
>>> 
>> I think the best idea so far is to (once or with each build) parse the 
>> sowing syntax into something Sphinx can then parse (that is, 
>> reStructuredText), but that seems like a lot of work to undertake when there 
>> is lower-hanging fruit in terms of making the docs more usable.
> 
>   
>> 
>> Another option which occurs to me now that you mention you're speeding up 
>> the man page build is that, if the full classic docs build were sufficiently 
>> fast, it could also be done automatically on the ReadTheDocs server (as a 
>> minimal build with docs is done now), and then perhaps we can just use the 
>> HTML that's generated on docs.petsc.org <http://docs.petsc.org/> .
>>>   Barry
>>> 
>>> 
>>>   
>>> 
>>>> On Aug 25, 2020, at 8:43 AM, Patrick Sanan >>> <mailto:patrick.sa...@gmail.com>> wrote:
>>>> 
>>>> This is still up for debate.
>>>> 
>>>> The main push right now is to try and move as many of the docs as possible 
>>>> (in particular, the users manual) to a web-friendlier format, using Sphinx 
>>>> and ReadTheDocs. Unlike the current "classic" docs at 
>>>> https://www.mcs.anl.gov/petsc/documentation/index.html 
>>>> <https://www.mcs.anl.gov/petsc/documentation/index.html>, this is used in 
>>>> a style very similar to CI - we have a .readthedocs.yml file, and 
>>>> docs.petsc.org <http://docs.petsc.org/> (linked to our ReadTheDocs 
>>>> account) updates itself whenever things are pushed to master (or other 
>>>> branches we specify).
>>>> 
>>>> What makes this a bit ugly at the moment is that a lot of the material, in 
>>>> particular the HTML source code and the man pages, is still built by the 
>>>> classic docs system (nightly). So, there are two subsets of documentation 
>>>> with two different build processes. This is obviously not what we want in 
>>>> the long run, but has the advantage of allowing us to make incremental 
>>>> progress (in my view, the only possible progress) on the docs.
>>>> 
>>>> Currently, the Sphinx build actually does a minimal build of PETSc, enough 
>>>> to obtain information to generate man pages links to the "classic" docs.
>>>> 
>>>>> Am 24.08.2020 um 19:22 schrieb Fande Kong:
>>>>> 
>>>>> Could we support "make docs" instead of "xx-docs-xxx"?

Re: [petsc-dev] Users manual update

2020-08-25 Thread Patrick Sanan
This is still up for debate.

The main push right now is to try and move as many of the docs as possible (in 
particular, the users manual) to a web-friendlier format, using Sphinx and 
ReadTheDocs. Unlike the current "classic" docs at 
https://www.mcs.anl.gov/petsc/documentation/index.html 
, this is used in a 
style very similar to CI - we have a .readthedocs.yml file, and docs.petsc.org 
 (linked to our ReadTheDocs account) updates itself 
whenever things are pushed to master (or other branches we specify).

What makes this a bit ugly at the moment is that a lot of the material, in 
particular the HTML source code and the man pages, is still built by the 
classic docs system (nightly). So, there are two subsets of documentation with 
two different build processes. This is obviously not what we want in the long 
run, but has the advantage of allowing us to make incremental progress (in my 
view, the only possible progress) on the docs.

Currently, the Sphinx build actually does a minimal build of PETSc, enough to 
obtain information to generate man pages links to the "classic" docs.

> Am 24.08.2020 um 19:22 schrieb Fande Kong :
> 
> Could we support "make docs" instead of "xx-docs-xxx"?
> 
> Or were we planning to support multiple formats? 
> 
> Thanks,
> 
> Fande,
> 
> 
> On Fri, Aug 21, 2020 at 9:58 AM huabel via petsc-dev wrote:
>   success to make sphinx-docs-html. 
>   
> Thanks!
> 
> 
>> On Aug 21, 2020, at 11:40 PM, Satish Balay > > wrote:
>> 
>> The updates referred in this thread are in master branch [and not maint]
>> 
>> Satish
>> 
>> On Fri, 21 Aug 2020, huabel via petsc-dev wrote:
>> 
>>> 
>>> ➜  petsc git:(maint) make sphinx-docs-html
>>> /usr/local/opt/python@3.8/bin/python3.8 ./config/gmakegen.py 
>>> --petsc-arch=arch-darwin-c-debug
>>> /usr/local/opt/python@3.8/bin/python3.8 
>>> /Volumes/data3/fun2/demox/petsc/config/gmakegentest.py 
>>> --petsc-dir=/Volumes/data3/fun2/demox/petsc 
>>> --petsc-arch=arch-darwin-c-debug --testdir=./arch-darwin-c-debug/tests
>>> gmake[1]: *** No rule to make target 'sphinx-docs-html'.  Stop.
>>> gmake: *** [GNUmakefile:17: sphinx-docs-html] Error 2
>>> 
>>> 
>>> 
 On Aug 21, 2020, at 11:06 PM, Jed Brown >>> > wrote:
 
 huabel via petsc-dev >>> > writes:
 
> Thanks, I copied the ‘developer’ folder out and commented out this line 
> (#extensions.append('sphinxcontrib.bibtex')); with a few copies it works 
> well (no cite links), which is enough for me. 
 
 We recommend using `make sphinx-docs-html` from the top-level.  It will 
 create/activate a virtualenv based on requirements.txt so everything will 
 work.  It's fast for incremental updates.
>>> 
>>> 
> 



Re: [petsc-dev] Users manual update

2020-08-21 Thread Patrick Sanan
Note that we have only tested things with Sphinx 2.4.4, because we rely on a 
custom extension to add the links to man pages. 

We recently added a helper target in the top-level makefile, which sets up a 
virtual environment for you with the same packages used for the ReadTheDocs 
build. Perhaps you can try that, or it can give a hint as to how to set up your 
own Python environment?

cd $PETSC_DIR
make sphinx-docs-clean
make sphinx-docs-html
open src/docs/sphinx_docs/_build/html/index.html



> Am 21.08.2020 um 10:39 schrieb huabel :
> 
> Hi 
>   I run make dirhtml get some error
> 
> ➜  sphinx_docs git:(maint) make dirhtml
> Running Sphinx v3.1.2
> 
> Found DOT install: /usr/local/bin/dot
> 
> 
> Extension error:
> Could not import extension sphinxcontrib.bibtex (exception: No module named 
> 'sphinxcontrib.bibtex')
> gmake: *** [Makefile:29: dirhtml] Error 2
> ➜  sphinx_docs git:(maint) pip3 show sphinxcontrib-bibtex
> Name: sphinxcontrib-bibtex
> Version: 1.0.0
> Summary: A Sphinx extension for BibTeX style citations.
> Home-page: https://github.com/mcmtroffaes/sphinxcontrib-bibtex 
> <https://github.com/mcmtroffaes/sphinxcontrib-bibtex>
> Author: Matthias C. M. Troffaes
> Author-email: matthias.troff...@gmail.com <mailto:matthias.troff...@gmail.com>
> License: BSD
> Location: /usr/local/lib/python3.8/site-packages
> Requires: oset, Sphinx, pybtex-docutils, pybtex
> Required-by: 
> ➜  sphinx_docs git:(maint) brew info graphviz
> graphviz: stable 2.44.1 (bottled), HEAD
Graph visualization software from AT&T and Bell Labs
> https://www.graphviz.org/ <https://www.graphviz.org/>
> /usr/local/Cellar/graphviz/2.44.1 (506 files, 18MB) *
>   Poured from bottle on 2020-07-12 at 16:47:06
> From: https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/graphviz.rb 
> <https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/graphviz.rb>
> License: EPL-1.0
> ==> Dependencies
> Build: autoconf ✔, automake ✘, pkg-config ✔
> Required: gd ✔, gts ✔, libpng ✔, libtool ✔, pango ✘
> ==> Options
> --HEAD
>   Install HEAD version
> ==> Analytics
> install: 59,114 (30 days), 157,762 (90 days), 518,413 (365 days)
> install-on-request: 48,388 (30 days), 125,480 (90 days), 390,556 (365 days)
> build-error: 0 (30 days)
> 
> 
> 



[petsc-dev] Users manual update

2020-08-21 Thread Patrick Sanan
Hi all -

We're working on getting the users manual fully migrated to Sphinx, so we
can delete the pure-LaTeX version.

So far, we've done most of the groundwork to set things up using Sphinx and
ReadTheDocs, and to do some semi-automated conversion of the manual. Most
people will access the manual via docs.petsc.org, but Sphinx also allows
you to build a PDF (via LaTeX) of the docs, which we'll maintain so as to
have something citable.

Most sections of the manual have a big ugly warning asking for volunteers,
e.g. here: https://docs.petsc.org/en/latest/manual/mat/ . An expert reading
through any of the chapters will certainly find things they can fix or
remove as they go through. That said, the main objective for now is simply
to provide the existing content in a web-friendly way. The minimal task
mostly involves fixing references and tables (use list-table whenever
possible).

Probably the best example so far of good formatting is the SNES chapter
which Jed has been working on:
https://docs.petsc.org/en/latest/manual/snes/

Regarding Sphinx, tips and useful conventions are being noted here:
https://docs.petsc.org/en/latest/developers/documentation/#sphinx-documentation-guidelines

(Note that for small docs changes like this, you can go to the ReadTheDocs
drop-down in the bottom right and click a link to directly edit on GitLab)


Re: [petsc-dev] Sphinx error

2020-07-07 Thread Patrick Sanan
Sorry about the delay in seeing this thread - Barry, I assume that this means 
that despite the pain you have at least some sort of working Sphinx. I 
hesitated to add anything related to Sphinx to the existing docs build 
(controlled by the makefiles) yet, but as more things are ported there that 
would make sense. 


> On 07.07.2020, at 14:18, Barry Smith wrote:
> 
>   "Any Python package that you install with brew will install the Homebrew 
> Python build."
> 
>   Sure in general with python on Apple, but you ignored my statement:  
> 
> $ which sphinx-build
> /usr/local/opt/sphinx-doc/bin/sphinx-build
> ~/Src/petsc/src/ksp/ksp/interface (master=) arch-master
> $ more /usr/local/opt/sphinx-doc/bin/sphinx-build
> #!/usr/local/Cellar/sphinx-doc/3.0.4/libexec/bin/python3.8
> # -*- coding: utf-8 -*-
> import re
> import sys
> from sphinx.cmd.build import main
> 
> sphinx seems to be carrying around its own python internally, thus quite 
> rightly getting Jed upset about packaging on Apple of open source software.  
> Or perhaps I misunderstand that line.
> 
>   Barry
> 
> 
>> On Jul 7, 2020, at 2:40 AM, Lisandro Dalcin wrote:
>> 
>> 
>> 
>> On Tue, 7 Jul 2020 at 05:37, Barry Smith wrote:
>> 
>>   My fault, for some reason sphinx from brew installed its own private 
>> python so I had to do the pip at that.
>> 
>> 
>> Any Python package that you install with brew will install the Homebrew 
>> Python build. Start using that one, and forget about the system Python 
>> install.
>> From brew you get `/usr/local/bin/python3`, and then you can freely `python3 
>> -m pip install `
>> I further symlink `/usr/local/bin/{python|pip}3` to `~/bin/{python|pip}`, so 
>> I don't have to deal ever again with the EOLed system Apple Python 2.
>> 
>> -- 
>> Lisandro Dalcin
>> 
>> Research Scientist
>> Extreme Computing Research Center (ECRC)
>> King Abdullah University of Science and Technology (KAUST)
>> http://ecrc.kaust.edu.sa/ 
> 



Re: [petsc-dev] Prediscusion of appropriate communication tool for discussion of PETSc 4 aka the Grand Refactorization

2020-06-19 Thread Patrick Sanan
The Gitlab wiki (on whichever repo) might also be a good complement to
whichever thread-based option is used.

 wrote on Fri., 19 June 2020 at 20:39:

> I'd expect we'd have a handful of issues with a common label. Easy to
> customize notifications. I don't see the point of a special repository
> except that it becomes less discoverable.
>
> On Jun 19, 2020 12:25, Hapla Vaclav  wrote:
>
> Why not have a separate project within the same group
> https://gitlab.com/petsc? That would allow separate notification
> settings, for instance. Or GitLab's Snippets feature mentioned by Jacob
> - I can imagine they might be confusing within the current repo if they
> would refer to a future API.
>
> That new repo can be kept forever for reference, if preferred. I don't see
> why it couldn't be referred to later.
>
> Anyway, Epics would be cool even for the current development.
>
> Vaclav
>
> On 19 Jun 2020, at 20:14, j...@jedbrown.org wrote:
>
> GitLab has Epics for managing related issues (we'd have to request
> community project status to activate it). I don't know if that feature
> helps facilitate what you envision. If using present features, I would have
> one outline issue and an issue for each major component. I'd rather not
> create a new repository. The institutional knowledge in the discussion can
> be useful to refer to later.
>
> On Jun 19, 2020 12:03, Barry Smith  wrote:
>
>
>   We could create a new empty repository just to use the issue tracker,
> then we could have the discussion in multiple issues. (having links to
> PETSc code etc would then require full paths).
>
>   Each design topic, of which there will be dozens, would get its own
> issue and new topics are trivially added. People can watch the topics they
> care about. Plus an issue for general discussion.
>
>   Barry
>
>
> On Jun 19, 2020, at 12:57 PM, Jacob Faibussowitsch 
> wrote:
>
> I think a special GitLab issue (something akin #360 CI Tracker) would do
> the job quite nicely.
>
> I agree more with this. This also allows you to immediately see the list
> of linked MR’s and issues right in the conversation, as well as being able
> to link code snippets. One gripe however is that the issue becomes
> monolithic with multiple conversation threads (as you can see the CI error
> issue is a totally unstructured Smörgåsbord). To keep a more structured
> overview we should have multiple issues that are linked together.
>
> Best regards,
>
> Jacob Faibussowitsch
> (Jacob Fai - booss - oh - vitch)
> Cell: (312) 694-3391
>
> On Jun 19, 2020, at 12:34 PM, Hapla Vaclav 
> wrote:
>
> I like Slack but it does NOT have the full history in the free plan - it's
> limited to 10k messages.
>
> I think a special GitLab issue (something akin #360 CI Tracker) would do
> the job quite nicely.
>
> Vaclav
>
> On 19 Jun 2020, at 06:48, Jed Brown  wrote:
>
> I would prefer this mailing list or GitLab issues because they are
>
> 1. genuinely open to external participants,
> 2. more async-friendly for those in different timezones and folks with
> young kids, and
> 3. searchable and externally linkable (e.g., from merge requests and
> issues)
>
> If we need synchronous breakouts, we could do so, but there should be a
> summary back for those who couldn't participate synchronously.
>
> Barry Smith  writes:
>
>  I'd like to start a discussion of PETSc 4.0 aka the Grand Refactorization
> but to have that discussion we need to discuss what tool to use for that
> discussion.
>
>  So this discussion is not about PETSc 4.0, please don't discuss it here.
>
>  What do people recommend to use for the discussion
>
> * dedicated mailing list
> * slack channel(s)
> * zulip channel(s)
> * something else?
>
> I'd like a single tool that anyone can join at any time, see the full
> history, can attach files, search, not cost more money than we are already
> paying, etc.
>
> I expect this discussion to take maybe a week and then the actual
> discussion to take on the order of two months.
>
>  Thanks
>
>Barry
>
>
>
>
>
>
>
>
>


Re: [petsc-dev] Valgrind MPI-Related Errors

2020-06-02 Thread Patrick Sanan
It's not the most satisfying solution but you can also pretty quickly
generate and use suppression files to at least de-clutter the output:
https://valgrind.org/docs/manual/manual-core.html#manual-core.suppress
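As a concrete (illustrative) sketch, a suppression entry for the MPICH eager-send warning quoted later in this thread might look like the following; the file and rule names are made up, and candidate entries can be generated automatically with `valgrind --gen-suppressions=all` and then used via `valgrind --suppressions=mpi.supp ./app`:

```text
# mpi.supp -- hypothetical suppression file (names are illustrative)
{
   mpich_eager_send_uninit
   Memcheck:Param
   write(buf)
   ...
   fun:MPIDI_CH3_EagerContigShortSend
}
```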

On Tue., 2 June 2020 at 18:59, Satish Balay via petsc-dev <petsc-dev@mcs.anl.gov> wrote:

> MPICH need to be built with the option --enable-g=meminit for it to be
> valgrind clean.
>
> --download-mpich does this [among other things that are useful during
> software developement]. Pre-configured MPICH is not likely to do this.
>
> You can verify with: mpichversion
>
> You can prebuild MPICH using PETSc with:
>
> ./configure --prefix=$HOME/soft/mpich --download-mpich CFLAGS= FFLAGS=
> CXXFLAGS= COPTFLAGS= CXXOPTFLAGS= FOPTFLAGS=
>
> And make this your default pre-installed MPI. [by adding $HOME/soft/mpich
> to PATH]
>
> Satish
>
>
> On Tue, 2 Jun 2020, Jacob Faibussowitsch wrote:
>
> > Yes I am using the pre-loaded MPICH from the docker image. Further proof
> from configure
> >
> > #define PETSC_HAVE_MPICH_NUMVERSION 30302300
> > #define PETSC_HAVE_MPIEXEC_ENVIRONMENTAL_VARIABLE MPIR_CVAR_CH3
> >
> > Best regards,
> >
> > Jacob Faibussowitsch
> > (Jacob Fai - booss - oh - vitch)
> > Cell: (312) 694-3391
> >
> > > On Jun 2, 2020, at 11:35 AM, Junchao Zhang 
> wrote:
> > >
> > > I guess Jacob already used MPICH, since
> MPIDI_CH3_EagerContigShortSend() is from MPICH.
> > >
> > > --Junchao Zhang
> > >
> > >
> > > On Tue, Jun 2, 2020 at 9:38 AM Satish Balay via petsc-dev <
> petsc-dev@mcs.anl.gov > wrote:
> > > use --download-mpich for valgrind.
> > >
> > > https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind <
> https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind>
> > >
> > > Satish
> > >
> > > On Tue, 2 Jun 2020, Karl Rupp wrote:
> > >
> > > > Hi Jacob,
> > > >
> > > > the recommendation in the past was to use MPICH as it is (was?)
> > > > valgrind-clean. Which MPI do you use? OpenMPI used to have these
> kinds of
> > > > issues. (My information might be outdated)
> > > >
> > > > Best regards,
> > > > Karli
> > > >
> > > > On 6/2/20 2:43 AM, Jacob Faibussowitsch wrote:
> > > > > Hello All,
> > > > >
> > > > > TL;DR: valgrind always complains about "Syscall param write(buf)
> points to
> > > > > uninitialised byte(s)” for a LOT of MPI operations in petsc code,
> making
> > > > > debugging using valgrind fairly annoying since I have to sort
> through a ton
> > > > > of unrelated stuff. I have built valgrind from source, used apt
> install
> > > > > valgrind, apt install valgrind-mpi to no avail.
> > > > >
> > > > > I am using valgrind from docker. Dockerfile is attached below as
> well. I
> > > > > have been unsuccessfully trying to resolve these local valgrind
> errors, but
> > > > > I am running out of ideas. Googling the issue has also not
> provided entirely
> > > > > applicable solutions. Here is an example of the error:
> > > > >
> > > > > $ make -f gmakefile test VALGRIND=1
> > > > > ...
> > > > > #==54610== Syscall param write(buf) points to uninitialised byte(s)
> > > > > #==54610==at 0x6F63317: write (write.c:26)
> > > > > #==54610==by 0x9056AC9: MPIDI_CH3I_Sock_write (in
> > > > > /usr/local/lib/libmpi.so.12.1.8)
> > > > > #==54610==by 0x9059FCD: MPIDI_CH3_iStartMsg (in
> > > > > /usr/local/lib/libmpi.so.12.1.8)
> > > > > #==54610==by 0x903F298: MPIDI_CH3_EagerContigShortSend (in
> > > > > /usr/local/lib/libmpi.so.12.1.8)
> > > > > #==54610==by 0x9049479: MPID_Send (in
> /usr/local/lib/libmpi.so.12.1.8)
> > > > > #==54610==by 0x8FC9B2A: MPIC_Send (in
> /usr/local/lib/libmpi.so.12.1.8)
> > > > > #==54610==by 0x8F86F2E: MPIR_Bcast_intra_binomial (in
> > > > > /usr/local/lib/libmpi.so.12.1.8)
> > > > > #==54610==by 0x8EE204E: MPIR_Bcast_intra_auto (in
> > > > > /usr/local/lib/libmpi.so.12.1.8)
> > > > > #==54610==by 0x8EE21F4: MPIR_Bcast_impl (in
> > > > > /usr/local/lib/libmpi.so.12.1.8)
> > > > > #==54610==by 0x8F887FB: MPIR_Bcast_intra_smp (in
> > > > > /usr/local/lib/libmpi.so.12.1.8)
> > > > > #==54610==by 0x8EE206E: MPIR_Bcast_intra_auto (in
> > > > > /usr/local/lib/libmpi.so.12.1.8)
> > > > > #==54610==by 0x8EE21F4: MPIR_Bcast_impl (in
> > > > > /usr/local/lib/libmpi.so.12.1.8)
> > > > > #==54610==by 0x8EE2A6F: PMPI_Bcast (in
> /usr/local/lib/libmpi.so.12.1.8)
> > > > > #==54610==by 0x4B377B8: PetscOptionsInsertFile (options.c:525)
> > > > > #==54610==by 0x4B39291: PetscOptionsInsert (options.c:672)
> > > > > #==54610==by 0x4B5B1EF: PetscInitialize (pinit.c:996)
> > > > > #==54610==by 0x10A6BA: main (ex9.c:75)
> > > > > #==54610==  Address 0x1ffeffa944 is on thread 1's stack
> > > > > #==54610==  in frame #3, created by MPIDI_CH3_EagerContigShortSend
> (???:)
> > > > > #==54610==  Uninitialised value was created by a stack allocation
> > > > > #==54610==at 0x903F200: MPIDI_CH3_EagerContigShortSend (in
> > > > > /usr/local/lib/libmpi.so.12.1.8)
> > > > >
> > > > > 

Re: [petsc-dev] Filter -help

2020-04-10 Thread Patrick Sanan
Grep is the goto. If you are only interested in the help string, you can do

-help intro

On Fri., 10 Apr. 2020 at 04:03, Mark Adams wrote:

> I use grep.
>
> On Thu, Apr 9, 2020 at 9:11 PM Jacob Faibussowitsch 
> wrote:
>
>> Hello All,
>>
>> Is there any built-in way to filter -help (like -info)? Standard PETSc
>> -help dumps an ungodly amount of stuff and if using SLEPc it spits out 8x
>> as much.
>>
>> Best regards,
>>
>> Jacob Faibussowitsch
>> (Jacob Fai - booss - oh - vitch)
>> Cell: (312) 694-3391
>>
>>


[petsc-dev] Developers Documentation and Metadocumentation

2020-04-09 Thread Patrick Sanan
tl:dr developer docs are now at docs.petsc.org, will be looking for help in
moving the Users Manual

Hi PETSc dev -

I wanted to draw your attention to some work we've been doing with the
documentation.

The new home for developer's documentation is
https://docs.petsc.org/en/latest/developers/index.html

It currently includes all the information from the Developers Manual and
the developers page on the website. The aim is to migrate all the
information from the wiki, as well (issue #596).

This is built with Sphinx and ReadTheDocs, which means that it should be
easy to contribute to and edit. Note that ReadTheDocs gives you a handy
menu in the bottom right which lets you navigate directly to the
reStructuredText (simple markup) source on GitLab, which should allow for
quick fixes (label docs-only).

While this is hopefully useful for the developer docs, *the main point is
to use this platform to get the Users Manual in a web-friendly format*.

Towards that aim, what we'll most need is for people to get involved in
porting chapters of the Users Guide. Just moving the existing information
is fine, but better is if an expert can update the information (ideally
making shorter) as they transfer it. A good primer for this is that there
are several chapters from the developers manual that need
attention: outdated information needs to be removed, formatting for
tables/figures could be improved, and code snippets should be included
directly from the PETSc source,
instead of duplicating it. We'll try to put together some concrete
instructions on how to do the LaTeX-to-reStructuredText conversion shortly.

Finally, I will take this opportunity to soapbox about how I think we
should approach documentation. I am very taken with the analogy with a
bonsai tree (which I got from here).
Docs should be very small, constantly pruned, and on display.

In a lot of ways, documentation is just like code. Don't add it unless it
serves a very clear purpose, realize that maintainability burden scales
with size, don't duplicate things, make it clean and minimal, think about
who your audience/users are, try to predict ways it will break later on,
etc.

One way that it's very different from code is that it doesn't have to be
100% correct. Breaking the docs (briefly) is not catastrophic and is easily
detected. Thus, there's more room for boldness, quick integration, and
contributions from less-experienced people.


Re: [petsc-dev] [petsc-users] About the interpolation and restriction matrix for cell-centered multigrid.

2020-03-24 Thread Patrick Sanan
This sort of feedback is great in terms of learning what can move out of
"tutorials" and into "tests".
https://gitlab.com/petsc/petsc/-/merge_requests/2629


On Tue., 24 March 2020 at 16:22, Jed Brown wrote:

> Mark Adams  writes:
>
> > Good question. It does look like there is Q1:
> >
> > src/dm/impls/da/da.c:-  ctype - DMDA_Q1 and DMDA_Q0 are currently the
> only
> > supported forms
> >
> > And in looking at a cell centered
> > example src/snes/examples/tutorials/ex20.c, it looks like only DMDA_Q1
> > works. I get an error when I set it to DMDA_Q0 (DMDA_Q1 is the default).
> > This is puzzling, Q0 is natural in cell centered.
>
> The comments in those examples are kinda wrong -- they never told the DM
> it was cell-centered so it uses a multigrid that isn't compatible with
> the boundary conditions.  The interpolation is Q1 on the dual grid, not
> conservative Q1 on cells.
>
> > I am not familiar with DMDA and I don't understand why, from ex20, that
> you
> > have an odd number of points on a cell centered grid and an even number
> for
> > vertex centered (eg, ex14). I would think that it should be the opposite.
>
> The example is bad.
>


Re: [petsc-dev] Suggestions for MatProductCreate()

2020-03-23 Thread Patrick Sanan
You can put whatever message you like when you deprecate the function, so
perhaps here you could leave the old functions and say something like

PETSC_DEPRECATED_FUNCTION("Use X() and Y() (since version 3.13)")
PetscErrorCode OldFunction();


and you can leave the man page but change the "level" to "deprecated". That
isn't usually done for simpler deprecations that are just name changes,
which look like this:

PETSC_DEPRECATED_FUNCTION("Use DMLocalToLocalBegin() (since version
3.5)") PETSC_STATIC_INLINE PetscErrorCode DMDALocalToLocalBegin(DM dm,Vec
g,InsertMode mode,Vec l) {return DMLocalToLocalBegin(dm,g,mode,l);}

(P.S. Once the web-based dev manual is merged, we could (easily, by editing
an .rst file on the web) make a separate section on deprecation since right
now it's buried in the style/usage guide, and doesn't have an example of
deprecating macros:
https://docs.petsc.org/en/psanan-docs-sphinx-dev-manual/developers/style.html#usage-of-petsc-functions-and-macros
)

On Mon., 23 March 2020 at 15:56, Satish Balay via petsc-dev <petsc-dev@mcs.anl.gov> wrote:

> On Mon, 23 Mar 2020, hzhang--- via petsc-dev wrote:
>
> > Lisandro:
> >
> > > * Please consider fixing MatProductCreate(A,B,C,) to take ownership
> > > (that is, increase reference count) of the A,B, and the (optional) C
> > > matrices provided as arguments. Otherwise it is way easy to get into
> the
> > > dangling pointer trap.
> > >
> > Can you give me a simple example of " get into the dangling pointer
> trap"?
> > We do not use reference count to keep track of A, B for Mat-Mat
> operations
> > in the current and previous versions.
> >
> > >
> > > * A thing also missing in the new API is a way to "cleanup" the A,B,C
> > > references, something MatProductReset(D) to get rid of (deallocates)
> the
> > > internal "product" context, thus removing  from D the references to
> A,B,C.
> > > This would be useful if you just want to compute JUST the symbolic
> product,
> > > I'm using that in some code to compute the nonzero pattern of A^2.
> > >
> > Again, giving an example would help me understand. If you just want
> > the  symbolic product, you can call
> > MatProductCreate()
> > MatProductSetType()
> > MatProductSetFromOptions()
> > MatProductSymbolic().
> > This is equivalent to previous MatMatMultSymbolic(), and is used in some
> > routines of PETSc.
> >
> > >
> > > * It should be also considered to provide backward compatibility
> > > PETSC_DEPRECATED calls to the previous MatMatMultSymbolic()
> > > and MatMatMultNumeric(). It looks like it would be trivial to do,
> though I
> > > may be getting it wrong because I have not looked at all the details.
> > >
> >  MatMatMultSymbolic/Numeric() are not recommended for users, and few
> > developers ever used them.  I only see one or two PETSc subroutines call
> > them. I do not think we need provide backward compatibility
> > PETSC_DEPRECATED calls for 6 pairs of such routines.
>
> If there is a simple map from old API to new API [just new names, or
> reorder arguments] - we should include the old API with PETSC_DEPRECATED
>
> Alternative is PETSC_DEPRECATED with some error statement? Jed will know
> better..
>
>
> Satish
>
> >
> > Hong
> >
>
>


Re: [petsc-dev] lazygit

2020-03-21 Thread Patrick Sanan
This looks like it could be very helpful, thanks! I usually end up doing
these sorts of things (especially partial staging) with GitX, which works
well for me but only on OS X, and I'm not sure if anyone is actively
developing it anymore.

I installed from MacPorts and looks like it's available from lots of other
package managers.

On Fri., 20 March 2020 at 22:08, Lisandro Dalcin <dalc...@gmail.com> wrote:

> Folks, I recommend this terminal-based UI for git:
> https://opensource.com/article/20/3/lazygit
>
> It is quite easy to do line-level stage, amend, rebase, fixup/squash,
> cherry-pick, etc. all without leaving the terminal.
>
> In https://github.com/jesseduffield/lazygit/releases, you can download a
> statically-linked binary (a bit heavy, ~ 15MB), just drop it in your ~/bin
> or ~/.local/bin, and you are ready to go.
>
> Profit!
>
>
>
> --
> Lisandro Dalcin
> 
> Research Scientist
> Extreme Computing Research Center (ECRC)
> King Abdullah University of Science and Technology (KAUST)
> http://ecrc.kaust.edu.sa/
>


Re: [petsc-dev] Truly minimal configure

2020-03-15 Thread Patrick Sanan



> On 15.03.2020, at 16:06, Jed Brown wrote:
> 
> Patrick Sanan  writes:
> 
>> I want to generate docs/manualpages/htmlmap as quickly as possible, from 
>> scratch (a clone on ReadTheDocs), so I want a fast configure which will let 
>> me run "make allcite". 
>> 
>> The below will work, I think, but I'm curious whether there's an even faster 
>> known way.
>> 
>> Also note that I had to explicitly turn off some MKL stuff, because it's 
>> included by default and depends on blaslapack. 
>> 
>> 
>>'./configure',
>>'--with-mpi=0',
>>'--with-blaslapack=0',
>>'--with-fortran=0',
>>'--with-cxx=0',
>>'--with-mkl_sparse_optimize=0',
>>'--with-mkl_sparse=0',
> 
> These checks should probably be conditional on having found blaslapack,
> but you don't have a working PETSc this way (it'll fail to link
> examples), and as far as I can tell, you're just using configure to
> install sowing and not need to hack the makefiles to do the tree
> traversal without including $PETSC_ARCH/lib/petsc/conf/petscvariables.
> 
> This seems fine and we can eventually move to calling doctext from
> Python or doing its job in Python (as part of the last step, moving man
> pages to Sphinx).
> 
> At a high level, we see here that the cost of make allcite is dominated
> by recursive make tree traversal.
> 
> Overhead  Command  Shared Object
>  35.81%   sh       libc-2.31.so
>  13.31%   make     make
>  11.18%   make     libc-2.31.so
>   7.63%   sh       ld-2.31.so
>   5.84%   make     ld-2.31.so
>   4.20%   doctext  doctext
>   4.07%   doctext  libc-2.31.so
>   3.88%   doctext  ld-2.31.so
>   3.64%   sh       bash
>   2.79%   rm       libc-2.31.so
>   1.99%   rm       ld-2.31.so
>   1.56%   sh       [unknown]
>   1.14%   doctext  [unknown]
>   0.56%   make     [unknown]
>   0.55%   python   libpython3.8.so.1.0
How'd you get this timing?

[petsc-dev] Truly minimal configure

2020-03-15 Thread Patrick Sanan
I want to generate docs/manualpages/htmlmap as quickly as possible, from 
scratch (a clone on ReadTheDocs), so I want a fast configure which will let me 
run "make allcite". 

The below will work, I think, but I'm curious whether there's an even faster 
known way.

Also note that I had to explicitly turn off some MKL stuff, because it's 
included by default and depends on blaslapack. 


'./configure',
'--with-mpi=0',
'--with-blaslapack=0',
'--with-fortran=0',
'--with-cxx=0',
'--with-mkl_sparse_optimize=0',
'--with-mkl_sparse=0',



Re: [petsc-dev] Request for comments: allow C99 internally

2020-03-07 Thread Patrick Sanan
Perhaps naively, I'd assume that while there may well be someone out there
relying on compilers for which this would be a problem, that same person is
also less likely to be able to upgrade PETSc.

The benefits seem well worth it. It'll make things just that much easier to
work with.

+1 for the for-loop declarations.

No more need to police the use of // C++-style comments?



Jed Brown wrote on Sat., 7 March 2020 at 06:48:

> I have a question for petsc-dev: Do you know anyone who needs to build
> PETSc with a compiler that doesn't support variadic macros and for-loop
> declarations?  (Both of these are in C99 and C++11, and supported by all
> tested configurations including compilers that don't fully implement
> these standards.)  Both MPICH and Open MPI use variable-length arrays
> and for-loop declarations, so you'd be hard-pressed building a modern
> stack with such a compiler.  I'm not proposing that we put these macros
> unguarded into the public headers, so a user of PETSc could still build
> with -std=c89 and the like.
>
>
> ## Background
>
> There is a common pattern in PETSc where we write
>
>   PetscInt some,several,variables;
>
>   // code
> #if defined(PETSC_HAVE_MAGIC)
>   function(several,);
> #endif
>   use(some,variables);
>
>
> Of course this gives unused variable warnings, so we tear our code apart
> like
>
>   PetscInt some,variables;
> #if defined(PETSC_HAVE_MAGIC)
>   PetscInt several;
> #endif
>
>   // code
> #if defined(PETSC_HAVE_MAGIC)
>   function(several,);
> #endif
>   use(some,variables);
>
>
> but the bigger problem is that we need different configurations just to
> check syntax of our compiled out blocks.  I propose allowing variadic
> macros (a C99 and C++11 feature) to allow code like
>
>   PetscInt some,several,variables;
>
>   // code
>   if (PetscDefined(HAVE_MAGIC)) {
> function(several,);
>   }
>   use(some,variables);
>
>
> This approach could also be used to avoid needing separate macros for
> every SETERRQ1-SETERRQ9, etc.  I have an example implementation in this
> MR, and it passes the full pipeline (after relaxing the -std=c89
> -pedantic build).
>
> https://gitlab.com/petsc/petsc/-/merge_requests/157/diffs
>
>
> We could also consider allowing for-loop declarations, which I believe
> leads to tighter and more understandable code because the reader doesn't
> have to wonder whether the variable is used after the loop.
>
>   for (PetscInt i=0; i<n; i++) { ... }
>
> Note that we cannot use variable-length arrays (VLA) because they are
> not in the intersection of C and C++.
>


Re: [petsc-dev] ccache tips?

2020-01-20 Thread Patrick Sanan
Thanks a lot for all the tips! I'm trying things out now with an approach like 
Jed's, but attempting to use system MPI instead of a specific build of MPICH.


> On 17.01.2020, at 22:13, Jed Brown wrote:
> 
> "Balay, Satish"  writes:
> 
>> I guess you can just create links [from mpicc to ccache] instead of
>> these wrapper scripts in these locations - with the same ccache
>> performance?
> 
> I don't because I have multiple mpicc at different paths.
> 
>> And with the scripts [mpicc/mpicxx links to ccache] in PATH - you
>> don't need --with-mpi-dir [and mpiexec/include/lib work-around]
> 
> Sure, but then it's global (all calls to gcc or clang are processed via
> ccache).  Probably harmless, but may needlessly flush your cache.
> 
>> Also is there an advantage to have mpicc linked to ccache instead of
>> gcc? [I guess having both linked to ccache will result in duplicate
>> ccache searches for the same command?]
> 
> I want the precision of gcc with/without ccache and (OMPI or MPICH)
> mpicc with/without ccache.  Other solutions are good if you just want
> ccache all the time.



[petsc-dev] ccache tips?

2020-01-17 Thread Patrick Sanan
I'm shamefully not using ccache. How do I do it? Is it as simple as ./configure 
--with-cc="ccache gcc" --with-cxx="ccache g++"? Works on OS X and various 
Linuxes? Any known issue with external packages or otherwise? 

Re: [petsc-dev] Valgrind problems

2019-12-07 Thread Patrick Sanan
I was actually wondering about this, as in some cases valgrind errors appear 
and sometimes they don't, but I didn't dig into it too deeply.

Here's my workaround, FWIW, which shows some output for that test on master.

I don't see any output when I just run the tests like this:

VALGRIND=1 make -f gmakefile.test test 
globsearch="dm_impls_plex_tests-ex1_fluent_2"

But I do see something if I do this to find any non-empty .err files:

find $PETSC_ARCH/tests -name *.err ! -size 0

And then I see these valgrind warnings after copy-pasting the path:


$ cat 
arch-master-extra-opt/tests/dm/impls/plex/examples/tests/runex1_fluent_2/runex1_fluent_2.err
==4990== Conditional jump or move depends on uninitialised value(s)
==4990==at 0x4C3705A: rawmemchr (in 
/usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==4990==by 0x7687351: _IO_str_init_static_internal (strops.c:41)
==4990==by 0x767878C: vsscanf (iovsscanf.c:40)
==4990==by 0x76721A3: sscanf (sscanf.c:32)
==4990==by 0x588147D: DMPlexCreateFluent_ReadSection (plexfluent.c:105)
==4990==by 0x5882E3A: DMPlexCreateFluent (plexfluent.c:246)
==4990==by 0x588492F: DMPlexCreateFluentFromFile (plexfluent.c:30)
==4990==by 0x577E2B7: DMPlexCreateFromFile (plexcreate.c:3254)
==4990==by 0x10BEDF: CreateMesh (ex1.c:170)
==4990==by 0x10A20B: main (ex1.c:430)
==4990==  Uninitialised value was created by a stack allocation
==4990==at 0x5881313: DMPlexCreateFluent_ReadSection (plexfluent.c:96)
==4990==
==4990== Use of uninitialised value of size 8
==4990==at 0x766276F: _IO_vfscanf (vfscanf.c:633)
==4990==by 0x767879C: vsscanf (iovsscanf.c:41)
==4990==by 0x76721A3: sscanf (sscanf.c:32)
==4990==by 0x588147D: DMPlexCreateFluent_ReadSection (plexfluent.c:105)
==4990==by 0x5882E3A: DMPlexCreateFluent (plexfluent.c:246)
==4990==by 0x588492F: DMPlexCreateFluentFromFile (plexfluent.c:30)
==4990==by 0x577E2B7: DMPlexCreateFromFile (plexcreate.c:3254)
==4990==by 0x10BEDF: CreateMesh (ex1.c:170)
==4990==by 0x10A20B: main (ex1.c:430)
==4990==  Uninitialised value was created by a stack allocation
==4990==at 0x5881313: DMPlexCreateFluent_ReadSection (plexfluent.c:96)
==4990==
==4990== Conditional jump or move depends on uninitialised value(s)
==4990==at 0x766277B: _IO_vfscanf (vfscanf.c:630)
==4990==by 0x767879C: vsscanf (iovsscanf.c:41)
==4990==by 0x76721A3: sscanf (sscanf.c:32)
==4990==by 0x588147D: DMPlexCreateFluent_ReadSection (plexfluent.c:105)
==4990==by 0x5882E3A: DMPlexCreateFluent (plexfluent.c:246)
==4990==by 0x588492F: DMPlexCreateFluentFromFile (plexfluent.c:30)
==4990==by 0x577E2B7: DMPlexCreateFromFile (plexcreate.c:3254)
==4990==by 0x10BEDF: CreateMesh (ex1.c:170)
==4990==by 0x10A20B: main (ex1.c:430)
==4990==  Uninitialised value was created by a stack allocation
==4990==at 0x5881313: DMPlexCreateFluent_ReadSection (plexfluent.c:96)
==4990==
> On 07.12.2019, at 21:45, Matthew Knepley wrote:
> 
> I am trying to clean up valgrind errors. However this one
> 
>   dm_impls_plex_tests-ex1_fluent_2
> 
> is valgrind clean on my machine. Does anyone get it to output something?
> 
>   Thanks,
> 
>  Matt
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/ 



[petsc-dev] Sowing: proposed change to allow single-entry lists on man pages

2019-06-15 Thread Patrick Sanan via petsc-dev
Lists on the man pages don't seem to be able to have a single entry,
because sowing requires you to start lists with "+" and end them with "-",
requiring at least two entries.

This leads to ugly-looking indentation for man pages for functions with a
single input or output parameter, e.g.
https://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/DMSTAG/DMStagVecGetArrayDOF.html

I think that this can be remedied with a small change to sowing (doctext),
to interpret a lone '-' as opening a list (and then closing it after one
entry):
https://bitbucket.org/psanan/sowing/commits/780ea53824388e8c6089ae2d6210332c63935edb

(Posting this here since I'm not sure how closely the
bitbucket.org/petsc/pkg-sowing repository is monitored)
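For concreteness, the current Sowing list markup and the proposed single-entry form might look like this in a man page comment (hypothetical function; sketch only, assuming the '+'/'.'/'-' markers described above):

```
/*@ FooGetValues - current syntax requires at least two list entries

   Input Parameters:
+  foo - the Foo object
-  idx - the index

   With the proposed change, a single-entry list could be written with
   a lone '-' opening (and immediately closing) the list:

   Output Parameter:
-  val - the value
@*/
```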


Re: [petsc-dev] [petsc4py] Broken tests for DMStag

2019-06-13 Thread Patrick Sanan via petsc-dev
I will attempt to just fix the underlying issue here, that is, allow
DMStagSetUniformCoordinates() to work for periodic boundary conditions with a
single rank in the corresponding dimension.

Am Mi., 12. Juni 2019 um 19:10 Uhr schrieb Patrick Sanan <
patrick.sa...@gmail.com>:

> Ah, okay, so unless something weird is going on, that probably should
> never have passed (if it's asking for periodic BCs with one rank in a given
> direction, and INSERT_VALUES, so it's not well-defined which of the
> multiple local dofs mapping to a given global dof should be used). I'll try
> to reproduce and fix it and/or talk to Chris.
>
> Am Mi., 12. Juni 2019 um 18:27 Uhr schrieb Balay, Satish via petsc-dev <
> petsc-dev@mcs.anl.gov>:
>
>> On Wed, 12 Jun 2019, Lisandro Dalcin via petsc-dev wrote:
>>
>> > $ python test/runtests.py -v -i dmstag TestDMStag_2D_PXY
>> > [0@kl-18232] Python 2.7 (/usr/local/opt/python@2/bin/python2.7)
>> > [0@kl-18232] PETSc 3.11.2 development (conf: 'arch-darwin-c-debug')
>> > [0@kl-18232] petsc4py 3.11.0
>> (build/lib.macosx-10.14-x86_64-2.7/petsc4py)
>> > testCoordinates (test_dmstag.TestDMStag_2D_PXY) ... ERROR
>> > testDMDAInterface (test_dmstag.TestDMStag_2D_PXY) ... ERROR
>> > testDof (test_dmstag.TestDMStag_2D_PXY) ... ok
>> > testGetOther (test_dmstag.TestDMStag_2D_PXY) ... ok
>> > testGetVec (test_dmstag.TestDMStag_2D_PXY) ... ERROR
>> > testMigrateVec (test_dmstag.TestDMStag_2D_PXY) ... ERROR
>>
>> I get:
>>
>> ==
>> ERROR: testCoordinates (test_dmstag.TestDMStag_2D_PXY)
>> --
>> Traceback (most recent call last):
>>   File "test/test_dmstag.py", line 39, in testCoordinates
>> self.da.setUniformCoordinates(0,1,0,1,0,1)
>>   File "PETSc/DMStag.pyx", line 255, in
>> petsc4py.PETSc.DMStag.setUniformCoordinates
>> Error: error code 56
>> [0] DMStagSetUniformCoordinates() line 1077 in
>> /home/balay/petsc.z/src/dm/impls/stag/stagutils.c
>> [0] DMStagSetUniformCoordinatesExplicit() line 1118 in
>> /home/balay/petsc.z/src/dm/impls/stag/stagutils.c
>> [0] DMStagSetUniformCoordinatesExplicit_2d() line 135 in
>> /home/balay/petsc.z/src/dm/impls/stag/stag2d.c
>> [0] DMLocalToGlobalBegin() line 2614 in
>> /home/balay/petsc.z/src/dm/interface/dm.c
>> [0] DMLocalToGlobalBegin_Stag() line 230 in
>> /home/balay/petsc.z/src/dm/impls/stag/stag.c
>> [0] No support for this operation for this object type
>> [0] Local to Global scattering with INSERT_VALUES is not supported for
>> single rank in a direction with boundary conditions (e.g. periodic)
>> inducing a non-injective local->global map. Either change the boundary
>> conditions, use a stencil width of zero, or use more than one rank in the
>> relevant direction (e.g. -stag_ranks_x 2)
>>
>> Satish
>>
>>
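The error message describes a non-injective local-to-global map. A plain-Python toy model (not the PETSc implementation) shows why INSERT_VALUES is ill-defined in that situation:

```python
# Toy model: 1D periodic grid with 4 global points on a single rank,
# stencil width 1. The local vector has ghost points at both ends that
# wrap around, so two different local entries map to global index 3
# (and to global index 0): the map is not injective.
from collections import Counter

nglobal = 4
stencil = 1

# local indices 0..5 -> global indices, with periodic wrap-around
local_to_global = [(i - stencil) % nglobal for i in range(nglobal + 2 * stencil)]
assert local_to_global == [3, 0, 1, 2, 3, 0]

# With ADD_VALUES the duplicated targets are summed, which is well defined.
# With INSERT_VALUES, global index 3 would receive either local entry 0 or
# local entry 4 -- there is no canonical choice, hence the error (unless
# you use more ranks in that direction or a stencil width of zero).
counts = Counter(local_to_global)
non_injective = sorted(g for g, c in counts.items() if c > 1)
assert non_injective == [0, 3]
```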


Re: [petsc-dev] [petsc4py] Broken tests for DMStag

2019-06-12 Thread Patrick Sanan via petsc-dev
Do you know if these arose because of the recent example bug fix, or have
they potentially been around for a while?


Am Mi., 12. Juni 2019 um 04:58 Uhr schrieb Lisandro Dalcin via petsc-dev <
petsc-dev@mcs.anl.gov>:

> $ python test/runtests.py -v -i dmstag TestDMStag_2D_PXY
> [0@kl-18232] Python 2.7 (/usr/local/opt/python@2/bin/python2.7)
> [0@kl-18232] PETSc 3.11.2 development (conf: 'arch-darwin-c-debug')
> [0@kl-18232] petsc4py 3.11.0 (build/lib.macosx-10.14-x86_64-2.7/petsc4py)
> testCoordinates (test_dmstag.TestDMStag_2D_PXY) ... ERROR
> testDMDAInterface (test_dmstag.TestDMStag_2D_PXY) ... ERROR
> testDof (test_dmstag.TestDMStag_2D_PXY) ... ok
> testGetOther (test_dmstag.TestDMStag_2D_PXY) ... ok
> testGetVec (test_dmstag.TestDMStag_2D_PXY) ... ERROR
> testMigrateVec (test_dmstag.TestDMStag_2D_PXY) ... ERROR
>
>
>
> --
> Lisandro Dalcin
> 
> Research Scientist
> Extreme Computing Research Center (ECRC)
> King Abdullah University of Science and Technology (KAUST)
> http://ecrc.kaust.edu.sa/
>


Re: [petsc-dev] User(s) manual sections field in manual pages?

2019-06-12 Thread Patrick Sanan via petsc-dev
I've just learned to use google or, when offline, to go to my local version
of
https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/singleindex.html
and use command-F. The doxygen search box is nice, though - if we want
that, do we have to commit to also using Doxygen for the manual?

Am Mi., 12. Juni 2019 um 15:43 Uhr schrieb Smith, Barry F. <
bsm...@mcs.anl.gov>:

>
>   Oana had the very legitimate complaint that we don't have search box for
> manual pages etc. One advantage of "post-processing" the manual pages to
> something like Doxygen is that I think you get nice searchs (you start
> typing and it makes suggestions) "for free". Otherwise do people have
> suggestions on how to set up a good search box (not one that just goes to
> google :-))
>
>
>
>
> > On Jun 12, 2019, at 7:53 AM, Jed Brown  wrote:
> >
> > Thanks for starting this thread, Patrick.  For the users manual, my plan
> > is to do some mild cleanup so that pandoc can convert it to Markdown or
> > rST.  From there, the two systems I'm familiar with are Sphinx and
> > Bookdown.  Everyone has seen Sphinx output on readthedocs sites.
> > Bookdown is fast and produces results like this (plus PDF and ePUB):
> >
> >  https://mc-stan.org/docs/2_19/reference-manual/
> >
> > Either way, I'll need to write a parser for Sowing man pages (in Python
> > because that's what people are familiar with and have installed).  I'm
> > not wild about converting all the man pages (in source files) to a
> > "standard" format (of which Doxygen is the most popular for C and C++)
> > because that would be a ton of churn for little benefit and I think
> > Doxygen syntax is no better than Sowing.
> >
> > My rough estimate is that Sphinx would take less work/customization than
> > Bookdown for the man pages, but more work for the users manual.
> >
> > I'd be interested to hear further thoughts.
> >
> > "Ham, David A via petsc-dev"  writes:
> >
> >> Firedrake is a very happy Sphinx user. Of course our primary language
> is Python. I’m not sure how wonderful sphinx is if your primary language is
> C (though support is, I believe, claimed).
> >>
> >> From: petsc-dev  on behalf of Patrick
> Sanan via petsc-dev 
> >> Reply-To: Patrick Sanan 
> >> Date: Wednesday, 12 June 2019 at 10:10
> >> To: "Smith, Barry F." 
> >> Cc: petsc-dev 
> >> Subject: Re: [petsc-dev] User(s) manual sections field in manual pages?
> >>
> >> (and another potential option is to use a tool to convert the current
> latex source or rendered pdf to HTML)
> >>
> >> Am Mi., 12. Juni 2019 um 09:40 Uhr schrieb Patrick Sanan <
> patrick.sa...@gmail.com<mailto:patrick.sa...@gmail.com>>:
> >> I'm interested to hear more about this plan to refactor the user's
> manual! In particular, is there a consensus on what's a good alternative to
> LaTeX?
> >>
> >> I got to chat with one of the developers of deal.ii yesterday, which
> was cool - this is of course an example of high quality documentation, and
> uses Doxygen. We've also discussed Sphinx and Madoko in the past. It's also
> not out of the question to avoid heavy dependencies and consider something
> custom, akin to the current HTML generation approach for the man pages and
> other docs on the website.
> >>
> >> Am Sa., 8. Juni 2019 um 09:33 Uhr schrieb Smith, Barry F. <
> bsm...@mcs.anl.gov<mailto:bsm...@mcs.anl.gov>>:
> >>
> >>  This was one of my many dreams. The sections in the users manual would
> have latex names and each man page would link to appropriate ones. Given
> the hopelessness of linking inside PDF documents on the web (in theory it
> is possible but no browsers support it) I gave up on it. You can remove
> these. With Jed's plans this summer to refactor the users manual to not use
> latex this all becomes possible but we'll want some automated way of doing
> this, not requiring listing links on each manual page.
> >>
> >>   Barry
> >>
> >>
> >>> On Jun 8, 2019, at 1:09 AM, Mills, Richard Tran via petsc-dev <
> petsc-dev@mcs.anl.gov<mailto:petsc-dev@mcs.anl.gov>> wrote:
> >>>
> >>> Colleagues,
> >>>
> >>> I have noticed that we have a "Users manual sections" section in the
> MatNullSpaceCreate() manual page, and an empty "User manual sections"
> section (which I suppose should be corrected to "Users manual sections",
> since it is officially the "PETSc Users Manual"). Those appear to be the
> only two manual pages that use these headings. Would we like to add these
> for other manual pages, or, since they appear to be unused, should we
> eliminate them?
> >>>
> >>> --Richard
>
>


Re: [petsc-dev] Man pages usage of "Collective on XXX"

2019-02-07 Thread Patrick Sanan via petsc-dev
(Forgot to reply-all before)

I'd propose to update the guidelines in the dev manual to say that unless
otherwise specified, collectivity is wrt the communicator associated with
the PETSc object in the first argument slot.


Am Do., 7. Feb. 2019 um 10:35 Uhr schrieb Patrick Sanan <
patrick.sa...@gmail.com>:

>
>
> Am Mi., 6. Feb. 2019 um 21:09 Uhr schrieb Matthew Knepley via petsc-dev <
> petsc-dev@mcs.anl.gov>:
>
>> On Wed, Feb 6, 2019 at 3:03 PM Dave May via petsc-dev <
>> petsc-dev@mcs.anl.gov> wrote:
>>
>>> * I notice that most man pages will say
>>>   Collective on 
>>> e.g.
>>>
>>> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMDA/DMDACreate.html
>>>
>>> * Some others say
>>>   Collective on 
>>>
>>> e.g.
>>>
>>> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMDA/DMDACreateNaturalVector.html
>>>
>>> or
>>>
>>>
>>> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMCompositeAddDM.html
>>>
>>> In the former, at least the word "DMDA" gets linked back to the
>>> implementation, whilst in the latter "DMComposite" does not.
>>>
>>> Should "Collective on <implementation>" be avoided?
>>> It is potentially somewhat unclear given that the name of the
>>> implementation does not appear anywhere in the arg  list (type or variable
>>> name).
>>>
>>> That said, "collective on <type>" could be similarly criticized if a
>>> method existed with two args of the same type.
>>>
>>> * Many of the methods in this file
>>>
>>>   www.mcs.anl.gov/petsc/petsc-current/src/dm/impls/shell/dmshell.c.html
>>>
>>> simply say "Collective" (without a type or implementation name), or they
>>> say "Logically Collective on XXX"
>>>
>>> I do realize that there is a pattern that the statement "collective on
>>> xxx" or "not collective" applies (implicitly) to the first argument of any
>>> PETSc function call (at least that I've come across) so possibly just
>>> indicating the method as "Collective" might suffice (assuming (i) there is
>>> a pattern and (ii) everyone knows about the pattern).
>>>
>>> Q: Should I make a PR to unify these man pages (and any others I spot)
>>> to just say "Collective on "?
>>>
>>
>> This has always bugged me. It should say, I think, "Collective on <arg name>", or "Logically collective on <arg name>".
>>
>
> I agree - ultimately I think we're just trying to say "this operation is
> [logically] collective wrt the MPI communicator associated with object
> XXX", so specifying this with respect to an argument makes the most sense.
> Right now the dev manual says "class XXX" which seems potentially ambiguous
> (for instance you could have two arguments for local/global Vecs living on
> different communicators).
>
> In terms of reducing clutter and making things more maintainable, I would
> support explicitly adopting the convention that if no argument is specified
> (e.g. just "Collective"), then this refers to the first argument - I think
> this is very intuitive for class methods (e.g. DMFoo(DM dm,..,) is going to
> be collective or not wrt the communicator associated with "dm").
>
>
>>   Thanks,
>>
>>  Matt
>>
>>
>>> Thanks,
>>>   Dave
>>>
>>>
>>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>> https://www.cse.buffalo.edu/~knepley/
>> <http://www.cse.buffalo.edu/~knepley/>
>>
>
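Under the convention discussed in this thread, a bare "Collective" would implicitly refer to the communicator of the first argument. On a man page that might look like the following (hypothetical function; sketch only):

```
/*@
   DMFoo - operates on a DM

   Collective
   (i.e., collective on the MPI communicator of the first argument, dm)

   Input Parameter:
.  dm - the DM
@*/
```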


Re: [petsc-dev] Code of Conduct [ACTION REQUIRED]

2018-10-23 Thread Patrick Sanan
In the first paragraph, with the list of characteristics that should not
trigger harassment, I suggest adding "native language" or similar, to
underscore the point that, while the working language is English, the
important things are the ideas, not the particulars of usage.

Am Di., 23. Okt. 2018 um 12:52 Uhr schrieb Karl Rupp :

> Dear PETSc folks,
>
> I ask all members of the PETSc team to review the following proposal for
> adopting a code of conduct:
>
>
> https://bitbucket.org/petsc/petsc/pull-requests/1196/code-of-conduct-adopt-contributor-covenant/diff
>
> If you have questions, concerns, etc., please reply to this email thread.
>
> ACTION REQUIRED: If you agree with adopting the proposed Code of
> Conduct, please click on "Approve" on the pull request webpage. This
> signals that the whole team agrees to and respects the code of conduct.
>
> Thanks and best regards,
> Karli
>


Re: [petsc-dev] large bug in KSPSolve_Chebyshev() affects all multigrid solvers; all core developers including Mark please read this

2018-09-20 Thread Patrick Sanan
2018-09-20 12:00 GMT+02:00 Mark Adams :

>
>
> On Wed, Sep 19, 2018 at 7:44 PM Smith, Barry F. 
> wrote:
>
>>
>> Look at the code in KSPSolve_Chebyshev().
>>
>> Problem 1) VERY MAJOR
>>
>> Once you start running the eigenestimates it always runs them, this
>> is because the routine begins with
>>
>>   if (cheb->kspest) {
>>
>>but once cheb->kspest is set it is never unset. This means, for
>> example, that every time PCMG runs the smoother that uses Chebyshev it runs
>> the eigenestimator (which uses GMRES) (even when it is suppose to be just
>> smoothing since the eigenestimates were already made in the setup stage).
>> This is totally wrong.
>
>
> Yikes, does this code (a few lines down) address this?
>
> if (amatid != cheb->amatid || pmatid != cheb->pmatid || amatstate !=
> cheb->amatstate || pmatstate != cheb->pmatstate) {
>
> Maybe you could run with CG as the outer solver and check that the number
> of GMRES solve calls (maybe with GMRESOrtho/max_it) is equal to the number
> of SNES iterations * (number of levels - 1).
>

If I run this

./ex19 -snes_monitor -ksp_view -pc_type mg -ksp_type gcr -pc_mg_levels 2

then I have one Chebyshev smoother (fine grid) and 2 SNES iterations, so
I'd expect 2 calls to KSPSolve_GMRES. That is what I see when I set a
breakpoint in gdb for KSPSolve_GMRES.



>
>> Sure enough, if I run, for example, src/snes/examples/tutorials/ex19.c
>> with -pc_type gamg I see in the debugger that GMRES is being called by
>> KSPSolve_Chebyshev as it smooths. For example,
>>
>> 0  MatSOR (mat=0x28689f0, b=0x29ee310, omega=1, flag=28, shift=0, its=1,
>> lits=1, x=0x29f4070)
>> at /sandbox/bsmith/petsc/src/mat/interface/matrix.c:3913
>> #1  0x7f59d2e353b9 in PCApply_SOR (pc=0x29aa770, x=0x29ee310,
>> y=0x29f4070)
>> at /sandbox/bsmith/petsc/src/ksp/pc/impls/sor/sor.c:31
>> #2  0x7f59d2fa6a7b in PCApply (pc=0x29aa770, x=0x29ee310, y=0x29f4070)
>> at /sandbox/bsmith/petsc/src/ksp/pc/interface/precon.c:462
>> #3  0x7f59d2faa6a7 in PCApplyBAorAB (pc=0x29aa770, side=PC_LEFT,
>> x=0x29f11c0, y=0x29f4070, work=0x29ee310)
>> at /sandbox/bsmith/petsc/src/ksp/pc/interface/precon.c:691
>> #4  0x7f59d3084d46 in KSP_PCApplyBAorAB (ksp=0x29c4d30, x=0x29f11c0,
>> y=0x29f4070, w=0x29ee310)
>> at /sandbox/bsmith/petsc/include/petsc/private/kspimpl.h:309
>> #5  0x7f59d3086874 in KSPGMRESCycle (itcount=0x7ffd3d60143c,
>> ksp=0x29c4d30)
>> at /sandbox/bsmith/petsc/src/ksp/ksp/impls/gmres/gmres.c:152
>> #6  0x7f59d3087352 in KSPSolve_GMRES (ksp=0x29c4d30) at
>> /sandbox/bsmith/petsc/src/ksp/ksp/impls/gmres/gmres.c:234
>> #7  0x7f59d30fae94 in KSPSolve (ksp=0x29c4d30, b=0x29dc900,
>> x=0x29d9a30)
>> at /sandbox/bsmith/petsc/src/ksp/ksp/interface/itfunc.c:780
>> #8  0x7f59d306a1e1 in KSPSolve_Chebyshev (ksp=0x29a9550) at
>> /sandbox/bsmith/petsc/src/ksp/ksp/impls/cheby/cheby.c:367
>> #9  0x7f59d30fae94 in KSPSolve (ksp=0x29a9550, b=0x28653d0,
>> x=0x2906a70)
>> at /sandbox/bsmith/petsc/src/ksp/ksp/interface/itfunc.c:780
>> #10 0x7f59d2f59042 in PCMGMCycle_Private (pc=0x2832fd0,
>> mglevelsin=0x2944b88, reason=0x0)
>> at /sandbox/bsmith/petsc/src/ksp/pc/impls/mg/mg.c:20
>> #11 0x7f59d2f5e350 in PCApply_MG (pc=0x2832fd0, b=0x28653d0,
>> x=0x2906a70)
>> at /sandbox/bsmith/petsc/src/ksp/pc/impls/mg/mg.c:377
>> #12 0x7f59d2fa6a7b in PCApply (pc=0x2832fd0, x=0x28653d0, y=0x2906a70)
>> at /sandbox/bsmith/petsc/src/ksp/pc/interface/precon.c:462
>> #13 0x7f59d31242d7 in KSP_PCApply (ksp=0x27df750, x=0x28653d0,
>> y=0x2906a70)
>> at /sandbox/bsmith/petsc/include/petsc/private/kspimpl.h:281
>> #14 0x7f59d31251ba in KSPInitialResidual (ksp=0x27df750,
>> vsoln=0x28610d0, vt1=0x28ff7b0, vt2=0x2903450,
>> vres=0x2906a70, vb=0x28653d0) at /sandbox/bsmith/petsc/src/ksp/
>> ksp/interface/itres.c:67
>> #15 0x7f59d30872ef in KSPSolve_GMRES (ksp=0x27df750) at
>> /sandbox/bsmith/petsc/src/ksp/ksp/impls/gmres/gmres.c:233
>> #16 0x7f59d30fae94 in KSPSolve (ksp=0x27df750, b=0x28653d0,
>> x=0x28610d0)
>> at /sandbox/bsmith/petsc/src/ksp/ksp/interface/itfunc.c:780
>> #17 0x7f59d3291d32 in SNESSolve_NEWTONLS (snes=0x26f2550) at
>> /sandbox/bsmith/petsc/src/snes/impls/ls/ls.c:224
>> #18 0x7f59d320f7da in SNESSolve (snes=0x26f2550, b=0x0, x=0x285d440)
>>
>>
> Can this just be the first time it is called, so it is doing the setup?
>
>
>>   Not only is the code wrong but it is also a huge inefficiency in the
>> code running all these unneeded GMRES.
>>
>
> Just to be clear, this is inefficiency, but I don't see why it is
> (mathematically) wrong.
>
>
>>   Problem 2) Less catastrophic
>>
>>   When cheb->kspest is set the "regular" Chebyshev is also run (after the
>> eigenvalues are estimated).
>
>
> I don't see that. I see:
>
> static PetscErrorCode KSPSolve_Chebyshev(KSP ksp)
> .
>   if (cheb->kspest) {
> .
> if (amatid != cheb->amatid || pmatid != cheb->pmatid || amatstate !=
> 
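The guard Mark quotes caches the matrix ids and state counters so that the (expensive) eigenestimator reruns only when the operators change. A plain-Python model of that caching pattern (hypothetical names; not the actual PETSc code):

```python
class ChebyModel:
    """Toy model of KSPSolve_Chebyshev's eigenestimate caching."""

    def __init__(self):
        self.cached = None        # (amatid, amatstate) of the last estimate
        self.estimates_run = 0

    def solve(self, amatid, amatstate):
        # Re-run the eigenestimator only if the operator (or its state
        # counter, bumped on every modification) has changed.
        key = (amatid, amatstate)
        if key != self.cached:
            self.estimates_run += 1   # stands in for the GMRES estimator
            self.cached = key
        # ... the actual Chebyshev smoothing would happen here ...

cheb = ChebyModel()
for _ in range(5):                 # repeated smoothing, unchanged operator
    cheb.solve(amatid=1, amatstate=7)
assert cheb.estimates_run == 1     # estimator ran once, as intended

cheb.solve(amatid=1, amatstate=8)  # operator modified -> state bumped
assert cheb.estimates_run == 2
```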

Re: [petsc-dev] Undefined symbols for _kspfgmresmodifypcksp_ and _kspfgmresmodifypcnochange_ when rebuilding

2018-08-22 Thread Patrick Sanan
Thanks all! "make allfortranstubs && make" is certainly practical enough
for me. I'd naively been assuming that "make deletefortranstubs && make"
would have the same effect.

2018-08-22 13:09 GMT+02:00 Jose E. Roman :

>
>
> > El 22 ago 2018, a las 12:52, Matthew Knepley 
> escribió:
> >
> > On Wed, Aug 22, 2018 at 6:35 AM Lawrence Mitchell 
> wrote:
> >
> > > On 22 Aug 2018, at 10:04, Patrick Sanan 
> wrote:
> > >
> > > This happens fairly frequently when I try to switch/update branches of
> PETSc (here invoked by building my own code, but the error message looks
> the same with "make check"):
> > >
> > > $ make
> > > /Users/patrick/petsc-stagbl/arch-darwin-stagbl-double-extra-debug/bin/mpicc
> -o runme.o -c -Wall -Wwrite-strings -Wno-strict-aliasing
> -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g3
>  -I/Users/patrick/petsc-stagbl/include -I/Users/patrick/petsc-stagbl/
> arch-darwin-stagbl-double-extra-debug/include -I/opt/X11/include
> `pwd`/runme.c
> > > /Users/patrick/petsc-stagbl/arch-darwin-stagbl-double-extra-debug/bin/mpicc
> -Wl,-multiply_defined,suppress -Wl,-multiply_defined -Wl,suppress
> -Wl,-commons,use_dylibs -Wl,-search_paths_first -Wl,-no_compact_unwind
> -Wl,-multiply_defined,suppress -Wl,-multiply_defined -Wl,suppress
> -Wl,-commons,use_dylibs -Wl,-search_paths_first -Wl,-no_compact_unwind
> -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas
> -fstack-protector -fvisibility=hidden -g3  -o runme runme.o
> -Wl,-rpath,/Users/patrick/petsc-stagbl/arch-darwin-stagbl-double-extra-debug/lib
> -L/Users/patrick/petsc-stagbl/arch-darwin-stagbl-double-extra-debug/lib
> -Wl,-rpath,/Users/patrick/petsc-stagbl/arch-darwin-stagbl-double-extra-debug/lib
> -Wl,-rpath,/opt/X11/lib -L/opt/X11/lib -Wl,-rpath,/opt/local/lib/
> gcc7/gcc/x86_64-apple-darwin17/7.3.0 
> -L/opt/local/lib/gcc7/gcc/x86_64-apple-darwin17/7.3.0
> -Wl,-rpath,/opt/local/lib/gcc7 -L/opt/local/lib/gcc7 -lpetsc -lcmumps
> -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lumfpack
> -lklu -lcholmod -lbtf -lccolamd -lcolamd -lcamd -lamd -lsuitesparseconfig
> -lsuperlu_dist -lHYPRE -lsundials_cvode -lsundials_nvecserial
> -lsundials_nvecparallel -llapack -lblas -lparmetis -lmetis -lX11 -lyaml
> -lstdc++ -ldl -lmpifort -lmpi -lpmpi -lgfortran -lquadmath -lm -lstdc++ -ldl
> > > Undefined symbols for architecture x86_64:
> > >   "_kspfgmresmodifypcksp_", referenced from:
> > >   import-atom in libpetsc.dylib
> > >   "_kspfgmresmodifypcnochange_", referenced from:
> > >   import-atom in libpetsc.dylib
> > > ld: symbol(s) not found for architecture x86_64
> > > collect2: error: ld returned 1 exit status
> > >
> > > I don't know why this is, exactly. Maybe it's more obvious from the
> perspective of someone more expert on the Fortran interface, and we could
> save some time reconfiguring (if these two symbols are really the only
> issue).
> > >
> > >  For these two symbols, the corresponding functions are declared but
> not defined in
> > >
> > > src/ksp/ksp/impls/gmres/fgmres/ftn-custom/zmodpcff.c
> > >
> > > "make deletefortranstubs" by itself doesn't seem to solve the problem.
> My sledgehammer workaround is to do everything short of blowing away my
> entire $PETSC_ARCH directory:
> > >
> > > make deletefortranstubs && make allclean && make reconfigure &&
> make && make check
> >
> >
> > Does it work to do:
> >
> > make allfortranstubs && make
> >
> > In these cases?
> >
> > Lawrence is correct. Here is what is happening.
> >
> > Someone changes an interface, and you pull the change. The header
> changes will cause all the C files
> > using that API to rebuild. However, the doc system (sowing) runs bfort
> on the C file to generate the Fortran
> > binding. It runs on all headers at once, so there is no separate rule
> for bforting a C file when it changes.
> > Things are now even worse, since we have Python code separate from bfort
> which creates the Fortran
> > modules, which also will not fire on updates to the C file.
> >
> > The simplest fix is that you know that every time you see this problem,
> you rerun 'make allfortranstubs'.
> > The complicated fix is to rewrite bfort and the module generation into
> one program which respects the
> > dependency information. Since there is literally no credit associated
> with this job, it is unlikely ever to happen.
> > We await the passin

[petsc-dev] Undefined symbols for _kspfgmresmodifypcksp_ and _kspfgmresmodifypcnochange_ when rebuilding

2018-08-22 Thread Patrick Sanan
This happens fairly frequently when I try to switch/update branches of
PETSc (here invoked by building my own code, but the error message looks
the same with "make check"):

$ make
/Users/patrick/petsc-stagbl/arch-darwin-stagbl-double-extra-debug/bin/mpicc
-o runme.o -c -Wall -Wwrite-strings -Wno-strict-aliasing
-Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g3
-I/Users/patrick/petsc-stagbl/include
-I/Users/patrick/petsc-stagbl/arch-darwin-stagbl-double-extra-debug/include
-I/opt/X11/include `pwd`/runme.c
/Users/patrick/petsc-stagbl/arch-darwin-stagbl-double-extra-debug/bin/mpicc
-Wl,-multiply_defined,suppress -Wl,-multiply_defined -Wl,suppress
-Wl,-commons,use_dylibs -Wl,-search_paths_first -Wl,-no_compact_unwind
-Wl,-multiply_defined,suppress -Wl,-multiply_defined -Wl,suppress
-Wl,-commons,use_dylibs -Wl,-search_paths_first -Wl,-no_compact_unwind
 -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas
-fstack-protector -fvisibility=hidden -g3  -o runme runme.o
-Wl,-rpath,/Users/patrick/petsc-stagbl/arch-darwin-stagbl-double-extra-debug/lib
-L/Users/patrick/petsc-stagbl/arch-darwin-stagbl-double-extra-debug/lib
-Wl,-rpath,/Users/patrick/petsc-stagbl/arch-darwin-stagbl-double-extra-debug/lib
-Wl,-rpath,/opt/X11/lib -L/opt/X11/lib
-Wl,-rpath,/opt/local/lib/gcc7/gcc/x86_64-apple-darwin17/7.3.0
-L/opt/local/lib/gcc7/gcc/x86_64-apple-darwin17/7.3.0
-Wl,-rpath,/opt/local/lib/gcc7 -L/opt/local/lib/gcc7 -lpetsc -lcmumps
-ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lumfpack
-lklu -lcholmod -lbtf -lccolamd -lcolamd -lcamd -lamd -lsuitesparseconfig
-lsuperlu_dist -lHYPRE -lsundials_cvode -lsundials_nvecserial
-lsundials_nvecparallel -llapack -lblas -lparmetis -lmetis -lX11 -lyaml
-lstdc++ -ldl -lmpifort -lmpi -lpmpi -lgfortran -lquadmath -lm -lstdc++ -ldl
Undefined symbols for architecture x86_64:
  "_kspfgmresmodifypcksp_", referenced from:
  import-atom in libpetsc.dylib
  "_kspfgmresmodifypcnochange_", referenced from:
  import-atom in libpetsc.dylib
ld: symbol(s) not found for architecture x86_64
collect2: error: ld returned 1 exit status

I don't know why this is, exactly. Maybe it's more obvious from the
perspective of someone more expert on the Fortran interface, and we could
save some time reconfiguring (if these two symbols are really the only
issue).

 For these two symbols, the corresponding functions are declared but not
defined in

src/ksp/ksp/impls/gmres/fgmres/ftn-custom/zmodpcff.c

"make deletefortranstubs" by itself doesn't seem to solve the problem. My
sledgehammer workaround is to do everything short of blowing away my entire
$PETSC_ARCH directory:

make deletefortranstubs && make allclean && make reconfigure && make &&
make check

but I'm sure that's suboptimal, and in particular I'd like to avoid the
reconfigure.

Any useful community knowledge on this point?


Re: [petsc-dev] PETSc goes Jenkins

2018-07-20 Thread Patrick Sanan
This is super cool - thanks Alp and Karl!

Once tuning is complete, how is one intended to interpret the nice green
check marks? "The library compiles" or "All the tests passed"?

I ask because in the demo PR there is the reassuring check mark and "3 of 3
builds passed", even though failed tests are reported (timeouts).


2018-07-20 3:35 GMT+02:00 Karl Rupp :

> Hi all,
>
> we now have a first step towards full continuous integration via Jenkins
> completed. Thus, every new pull request that is (re-)based on a commit in
> master not older than today will be automatically tested with a subset of
> common tests that are intended to expose the most frequent issues. This, in
> particular, includes configurations with 64 bit integers as well as complex
> arithmetic.
>
> The integration of Jenkins into Bitbucket is smooth: You will notice on
> our demo pull request
>
> https://bitbucket.org/petsc/petsc/pull-requests/1039/jenkins-file-for-build-pipelines-tied-to/diff
> that on the right it says "3 of 3 builds passed". If you click on the
> link, you will get further details on the individual builds and find
> further links to the test output stored on the Jenkins server.
>
> Implications on our development workflow: Currently 'next' gets (ab)used
> for all kinds of portability tests. As a consequence, every buggy merge
> clogs the whole integration pipeline, making it hard to integrate other
> PRs. With the Jenkins server in place, all pull requests will receive a
> good share of portability testing *before* they reach next. This reduces
> the burden on next, (hopefully) leading to faster code integration.
>
> Corollary: I strongly encourage all PETSc developers to issue pull
> requests rather than merging to next directly (use your own judgment for
> exceptions!).
>
> Please note that we are still fine-tuning various aspects of the Jenkins
> infrastructure (location of the Jenkins server, which test nodes to use,
> which configurations to test, etc.). Most of these things are changes under
> the hood, though. If something still bubbles up and causes the testing to
> choke, please be considerate with us ;-)
>
> Finally, I'd like to explicitly thank Alp Dener for his help on getting
> Jenkins to run smoothly. Any credit should go to him.
>
> Best regards,
> Karli
>


Re: [petsc-dev] VTK viewer design question

2018-06-29 Thread Patrick Sanan
2018-06-29 17:15 GMT+02:00 Jed Brown :

> Stefano Zampini  writes:
>
> > Vec and DM classes should not be visible from Sys. This is why they are
> > PetscObject.
> > If they were visible, builds with --with-single-library=0 will be broken.
> >
> > 2018-06-29 17:06 GMT+03:00 Patrick Sanan :
> >
> >> I'm looking at the VTK viewer implementation  and I notice that
> >> PetscViewerVTKAddField() [1]
> >> accepts a parameter which, despite being called "dm", is of type
> >> PetscObject.
>
> I would not object to moving vtkv.c into src/dm -- it isn't actually
> usable without DM.
>

Yeah, it seems like moving this logic somehow into the DM package (either
by moving the entire thing, or by introducing another callback?) is the
natural thing to do.

>
> >> I think that this is used, amongst other things, to ensure that vectors
> >> being queued up to be written all come from the same DM.
> >>
> >> I'd like to relax this to only require that the vectors all come from
> >> *compatible* DMDAs, but this would require the DM API in vtkv.c.
>
> Why?  The function is developer level and there is VecGetDM() to give
> the correct DM.  I would rather that PetscViewerVTKAddField_VTK change
> this logic to merely check for compatibility:
>
>   if (vtk->dm) {
> if (dm != vtk->dm) SETERRQ(PetscObjectComm((
> PetscObject)viewer),PETSC_ERR_ARG_INCOMP,"Cannot write a field from more
> than one grid to the same VTK file");
>
This is what I actually do in my proof-of-concept hack. I hackily include
petscdm.h in vtkv.c, cast the arguments back to (DM), and use
DMGetCompatibility() instead of the check for identical DMs.

Later, I do use VecGetDM() to pull out the required dof/node, but this
happens in a callback defined in grvtk.c, which is in the DM package, so no
problem.


> >> My questions: is this argument of type PetscObject for any reason other
> >> than not wanting to bother including petscdm.h ? Might this be something
> >> other than a DM in some case (and in which case, why is the argument
> called
> >> "dm")? Am I missing a reason that I'll get into trouble eventually if I
> >> change this?
> >>
> >> (Similar question for the "vec" argument).
> >>
> >> [1] http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/
> >> Viewer/PetscViewerVTKAddField.html
> >>
> >
> >
> >
> > --
> > Stefano
>
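The relaxation Patrick describes, requiring compatibility rather than identity of the queued DMs, can be modeled in a few lines of plain Python (hypothetical names; a user-supplied predicate stands in for DMGetCompatibility()):

```python
class VTKViewerModel:
    """Toy model of the PetscViewerVTKAddField bookkeeping."""

    def __init__(self, compatible):
        self.dm = None                # DM of the first queued field
        self.fields = []
        self.compatible = compatible  # stands in for DMGetCompatibility()

    def add_field(self, dm, vec):
        if self.dm is None:
            self.dm = dm
        # Old check: `dm is self.dm` (identical object required).
        # Relaxed check: any compatible DM is accepted.
        elif not self.compatible(self.dm, dm):
            raise ValueError("cannot mix fields from incompatible grids")
        self.fields.append((dm, vec))

# Two distinct-but-compatible DMs, modeled as (nx, ny) layout tuples:
same_layout = lambda a, b: a == b
viewer = VTKViewerModel(compatible=same_layout)
viewer.add_field((8, 8), "vec_a")
viewer.add_field((8, 8), "vec_b")      # different object, same layout: OK
assert len(viewer.fields) == 2
try:
    viewer.add_field((4, 4), "vec_c")  # incompatible layout: rejected
except ValueError:
    pass
else:
    raise AssertionError("expected incompatible grid to be rejected")
```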


[petsc-dev] VTK viewer design question

2018-06-29 Thread Patrick Sanan
I'm looking at the VTK viewer implementation  and I notice that
PetscViewerVTKAddField() [1]
accepts a parameter which, despite being called "dm", is of type
PetscObject.

I think that this is used, amongst other things, to ensure that vectors
being queued up to be written all come from the same DM.

I'd like to relax this to only require that the vectors all come from
*compatible* DMDAs, but this would require the DM API in vtkv.c.

My questions: is this argument of type PetscObject for any reason other
than not wanting to bother including petscdm.h ? Might this be something
other than a DM in some case (and in which case, why is the argument called
"dm")? Am I missing a reason that I'll get into trouble eventually if I
change this?

(Similar question for the "vec" argument).

[1] http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Viewer/
PetscViewerVTKAddField.html


Re: [petsc-dev] Matt's Tutorial Slides from PETSc Users' Meeting

2018-06-20 Thread Patrick Sanan
Sorry, I'm an idiot - I did indeed write down the wrong number and what I
wanted is there on slide 195.

2018-06-20 14:22 GMT+02:00 Matthew Knepley :

> On Wed, Jun 20, 2018 at 2:22 AM Patrick Sanan 
> wrote:
>
>> Matt, are your tutorial slides from the London Users' Meeting available?
>> I made a note to check out something from there (the highly-automated saddle
>> point PC, which I'd like to use in an example/test) but either I wrote
>> something down wrong in my notes, or the slides you showed are different
>> from the last available slides on the website (CEMRACS 2016).
>>
>
> I did not put them up yet, but they are really identical to CEMRACS. I
> don't think I changed anything. What is not working?
>
>   Thanks,
>
> Matt
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/ <http://www.caam.rice.edu/~mk51/>
>


[petsc-dev] Matt's Tutorial Slides from PETSc Users' Meeting

2018-06-20 Thread Patrick Sanan
Matt, are your tutorial slides from the London Users' Meeting available? I
made a note to check out something from there (the highly-automated saddle
point PC, which I'd like to use in an example/test) but either I wrote
something down wrong in my notes, or the slides you showed are different
from the last available slides on the website (CEMRACS 2016).


Re: [petsc-dev] Testing system

2018-06-06 Thread Patrick Sanan
And for more info

make -f gmakefile.test help-test

2018-06-06 23:55 GMT+01:00 Jed Brown :

> Use search, as in
>
> make -f gmakefile test search=mat_%
>
> Often you'll want to be more specific.
>
> "Kong, Fande"  writes:
>
> > How to run tests for Mat examples only?
> >
> >  "make PETSC_ARCH=arch-darwin-c-opt-3.9.2  -f gmakefile.test test" run
> > everything.
> >
> > Thanks,
> >
> > Fande,
>


[petsc-dev] Tiny pull requests via Bitbucket web interface

2018-06-05 Thread Patrick Sanan
After Karl's nice talk on contributing to PETSc, I was reminded of a
similar talk that I saw at the Julia conference. The title was also
something like "Contributing is easy!" but used an even more extreme
example of making your first contribution. It was (as Barry encouraged) a
small documentation change, and they demonstrated how to do this via the
GitHub web interface.


This could be a great way to lower the "activation energy" for these kinds
of tiny, trivially-reviewable changes.


The practical steps with Bitbucket are approximately:

   - go to the PETSc Bitbucket site
   - navigate to the source file you want to change
   - "edit"
   - make sure you are at "master" (I had to select this from the
   pull-down, otherwise "edit" was greyed out and gave a hint on
   mouseover)
   - make your small, innocuous edit
   - "commit"
   - select "create a pull request" if need be, and fill out comments /
   reviewers as usual.

I believe that if you don't have write access, you can still do this and it
will create a fork for you automatically.


Here's a test:

https://bitbucket.org/petsc/petsc/pull-requests/975/docs-manual-makefile-fix-typo-in-error/diff


Thoughts? Should this be actively encouraged?


Re: [petsc-dev] [petsc-users] Segmentation Violation in getting DMPlex coordinates

2018-04-29 Thread Patrick Sanan
For functions like this (with only one implementation), should this new check
be considered the new best practice (as opposed to the composition approach,
defining things with names like DMDASetUniformCoordinates_DMDA())? It seems
like less boilerplate, as well as avoiding a function on the stack (and the
check itself if it's turned off in optimized mode).

2018-04-28 22:38 GMT+02:00 Smith, Barry F. :

>
>   Added runtime error checking for such incorrect calls in
> barry/dmda-calls-type-check
>
>
> > On Apr 28, 2018, at 9:19 AM, Matthew Knepley  wrote:
> >
> > On Sat, Apr 28, 2018 at 2:08 AM, Danyang Su 
> wrote:
> > Hi All,
> >
> > I use DMPlex and need to get coordinates back after distribution.
> However, I always get segmentation violation in getting coords values in
> the following codes if using multiple processors. If only one processor is
> used, it works fine.
> >
> > For each processor, the off value starts from 0, which looks good. I
> also tried 0-based indexing, which gives the same error. Would anyone help
> check what is wrong here?
> >
> >  idof   1 off   0
> >  idof   2 off   0
> >  idof   1 off   2
> >  idof   2 off   2
> >  idof   1 off   4
> >  idof   2 off   4
> >  idof   1 off   6
> >  idof   2 off   6
> >  idof   1 off   8
> >  idof   2 off   8
> >
> >
> >   DM :: distributedMesh, cda
> >   Vec :: gc
> >   PetscScalar, pointer :: coords(:)
> >   PetscSection ::  cs
> >
> >   ...
> >
> >   call DMGetCoordinatesLocal(dmda_flow%da,gc,ierr)
> >   CHKERRQ(ierr)
> >
> >   call DMGetCoordinateDM(dmda_flow%da,cda,ierr)
> >   CHKERRQ(ierr)
> >
> >   call DMGetDefaultSection(cda,cs,ierr)
> >   CHKERRQ(ierr)
> >
> >   call PetscSectionGetChart(cs,istart,iend,ierr)
> >   CHKERRQ(ierr)
> >
> >   !c get coordinates array
> >   call DMDAVecGetArrayF90(cda,gc,coords,ierr)
> >
> > You cannot call a DMDA function if you have a DMPlex. You just call
> VecGetArrayF90()
> >
> >Matt
> >
> >   CHKERRQ(ierr)
> >
> >   do ipoint = istart, iend-1
> >
> > call PetscSectionGetDof(cs,ipoint,dof,ierr)
> > CHKERRQ(ierr)
> >
> > call PetscSectionGetOffset(cs,ipoint,off,ierr)
> > CHKERRQ(ierr)
> >
> > inode = ipoint-istart+1
> >
> > if (cell_coords == coords_xyz) then
> >   nodes(inode)%x = coords(off+1)
> >   nodes(inode)%y = coords(off+2)
> >   nodes(inode)%z = coords(off+3)
> > else if (cell_coords == coords_xy) then
> >   nodes(inode)%x = coords(off+1)
> >   nodes(inode)%y = coords(off+2)
> >   nodes(inode)%z = 0.0d0
> > else if (cell_coords == coords_yz) then
> >   nodes(inode)%x = 0.0d0
> >   nodes(inode)%y = coords(off+1)
> >   nodes(inode)%z = coords(off+2)
> > else if (cell_coords ==coords_xz) then
> >   nodes(inode)%x = coords(off+1)
> >   nodes(inode)%y = 0.0d0
> >   nodes(inode)%z = coords(off+2)
> > end if
> >   end do
> >
> >   call DMDAVecRestoreArrayF90(cda,gc,coords,ierr)
> >   CHKERRQ(ierr)
> >
> > Thanks,
> >
> > Danyang
> >
> >
> >
> >
> >
> > --
> > What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> > -- Norbert Wiener
> >
> > https://www.cse.buffalo.edu/~knepley/
>
>


Re: [petsc-dev] Figures 20, 21 in Chapter 13.1 of PETSc manual are out of date

2018-04-19 Thread Patrick Sanan
Sorry I didn't catch this when cleaning up the manual recently. If you
don't have time to update it yourself, let me know and I will.

2018-04-19 5:23 GMT+02:00 Karl Rupp :

> Hi Junchao,
>
> 1) The manual says the example is src/ksp/ksp/examples/ex10.c, but it
>> actually links to src/ksp/ksp/examples/tutorial/ex10.c. This is a minor
>> issue.
>> 2) One could not use the same command line options (-f0 medium -f1 arco6)
>> as shown in the figures. There are no such matrices so one can't simply
>> copy & paste.
>> 3) It seems ex10.c has gone through large changes and it cannot produce
>> a log summary similar to the figures anymore (e.g., no stages like "Event
>> Stage 4: KSPSetUp 1" ).
>>
>
> such things (unfortunately) tend to happen over time. Please feel free to
> fix it :-)
>
> Best regards,
> Karli
>


Re: [petsc-dev] [petsc-users] PetscPrintf

2018-04-15 Thread Patrick Sanan
Does the logic of this analysis hold?

1. We are trying to use the same functions (in particular, PetscVFPrintf)
for two purposes:
   a. printing error messages (don't want to malloc)
   b. use by public API printing functions (don't want length restrictions)

2. Right now, PetscVFPrintf works fine for a but not for b. We could make
it work for b and not for a by malloc'ing a longer string.

3. Printing from error handlers happens through PetscErrorPrintf (default :
http://www.mcs.anl.gov/petsc/petsc-dev/src/sys/error/errtrace.c.html#PetscErrorPrintfDefault
), so if there's a special requirement for printing error messages, we can
impose it here.

A solution could then be something which skips the malloc only when
printing an error, e.g.

1. Add an argument to PetscVFPrintf (say "PetscBool noMalloc") [1]
2. API (PetscPrintf(), etc.) functions use noMalloc = PETSC_FALSE
3. error functions (PetscErrorPrintf() ) functions use noMalloc =
PETSC_TRUE


[1] And probably do the same thing in PetscVSNPrintf since, as Dr. Zhang
pointed out, this could also call malloc while handling an error, if the
string was long enough




2018-04-13 15:59 GMT+02:00 Junchao Zhang <jczh...@mcs.anl.gov>:

> On Thu, Apr 12, 2018 at 9:48 AM, Smith, Barry F. <bsm...@mcs.anl.gov>
> wrote:
>
>>
>>
>> > On Apr 12, 2018, at 3:59 AM, Patrick Sanan <patrick.sa...@gmail.com>
>> wrote:
>> >
>> > I also happened to stumble across this yesterday. Is the length
>> restriction for the default printer (I assume from the array of 8*1024
>> chars in PetscVFPrintfDefault() ) intended behavior to be documented, or a
>> bug to be fixed?
>>
>>  You could call it either. My problems are
>>
>> 1) that given a format string I don't know in advance how much work space
>> is needed so cannot accurately malloc, for sure, enough space
>>
>> 2) since this can be called in an error handler I really don't want it
>> calling malloc().
>>
> PetscVSNPrintf does still contain a malloc "122  ierr  =
> PetscMalloc1(oldLength, );CHKERRQ(ierr);"
> Also, vsnprintf returns "the number of characters that would have been
> written if n had been sufficiently large". I don't know why you void'ed
> it.
> We can not make the 8K chars a requirement since users don't know how many
> chars they want to print upfront.
> Anyway, crash is better than silent errors.
>
>>
>>Hence it lives in this limbo. I don't even know a way to add a good
>> error checking that detects if the buffer is long enough. All in all it is
>> bad ugly code, any suggestions on improvements would be appreciated.
>>
>>Barry
>>
>> >
>> > 2018-04-12 2:16 GMT+02:00 Rongliang Chen <rongliang.c...@gmail.com>:
>> > Thanks Barry. I found petsc-3.6 and older versions did not have this
>> restriction.
>> >
>> > Best,
>> > Rongliang
>> >
>> >
>> > On 04/12/2018 07:22 AM, Smith, Barry F. wrote:
>> >Yes, PetscPrintf() and related functions have a maximum string
>> length of about 8000 characters.
>> >
>> > Barry
>> >
>> >
>> > On Apr 11, 2018, at 6:17 PM, Rongliang Chen <rongliang.c...@gmail.com>
>> wrote:
>> >
>> > Dear All,
>> >
>> >
>> > When I tried to print a long string using PetscPrintf() I found that it
>> truncated the string. Attached is a simple example for this (run with
>> single processor). I used PetscPrintf() and printf() to print the same
>> string and the printf() seems OK. I am using petsc-3.8.4.
>> >
>> >
>> > Best,
>> >
>> > Rongliang
>> >
>> > 
>> >
>> >
>> >
>>
>>
>


Re: [petsc-dev] [petsc-users] PetscPrintf

2018-04-12 Thread Patrick Sanan
I also happened to stumble across this yesterday. Is the length restriction
for the default printer (I assume from the array of 8*1024 chars in
PetscVFPrintfDefault() ) intended behavior to be documented, or a bug to be
fixed?

2018-04-12 2:16 GMT+02:00 Rongliang Chen :

> Thanks Barry. I found petsc-3.6 and older versions did not have this
> restriction.
>
> Best,
> Rongliang
>
>
> On 04/12/2018 07:22 AM, Smith, Barry F. wrote:
>
>>Yes, PetscPrintf() and related functions have a maximum string length
>> of about 8000 characters.
>>
>> Barry
>>
>>
>> On Apr 11, 2018, at 6:17 PM, Rongliang Chen 
>>> wrote:
>>>
>>> Dear All,
>>>
>>>
>>> When I tried to print a long string using PetscPrintf() I found that it
>>> truncated the string. Attached is a simple example for this (run with
>>> single processor). I used PetscPrintf() and printf() to print the same
>>> string and the printf() seems OK. I am using petsc-3.8.4.
>>>
>>>
>>> Best,
>>>
>>> Rongliang
>>>
>>> 
>>>
>>
>
>


Re: [petsc-dev] upcoming release and testing

2018-04-05 Thread Patrick Sanan
Spellcheck fixes for dev.html :

- relavant --> relevant
- diffencing --> differencing
- seperately --> separately

Patch also attached.

2018-04-05 17:03 GMT+02:00 Satish Balay :

> Thanks Karl!
>
> BTW: Its best to update dev.html in master [and then I can rebase
> balay/release-3.9 over master. This way - the commit in
> balay/release-3.9 is the final commit for the release - and traceable
> later]
>
> So i've moved this commit over to master.
>
> Satish
>
> On Thu, 5 Apr 2018, Karl Rupp wrote:
>
> > Hi Satish,
> >
> > FYI: I added a mention of GPU backends available in the release and fixed
> > missing ul-tags in src/docs/website/documentation/changes/39.html in
> your
> > balay/release-3.9 branch.
> >
> > Best regards,
> > Karli
> >
> > On 04/02/2018 08:18 PM, Satish Balay wrote:
> > > All,
> > >
> > > It would be good if
> > > http://www.mcs.anl.gov/petsc/documentation/changes/dev.html is cheked
> and
> > > updated with any obvious missing stuff.
> > >
> > > Thanks,
> > > Satish
> > >
> >
>
>


0001-Docs-dev-changes-typos.patch
Description: Binary data


Re: [petsc-dev] Manual builds fail on master

2018-03-22 Thread Patrick Sanan
Thanks! I should have caught this myself by rebuilding all my local docs
instead of just the manual.

2018-03-22 17:05 GMT+01:00 Satish Balay <ba...@mcs.anl.gov>:

> >>>>>>>>>
> petsc@thwomp:/sandbox/petsc/petsc.clone/src/docs/tex/manual$
> /usr/bin/make  --no-print-directory manual.pdf
> LOC=/sandbox/petsc/petsc.clone GONULL=
> 
>
> LaTeX Warning: Reference `ch_index' on page 7 undefined on input line 42.
>
> ) [7] [8] (./acknowltmp.tex
> ! You can't use `macro parameter character #' in horizontal mode.
> l.1 ...++man+manualpages/Mat/MatMPIAdjToSeq-.html#
>   MatMPIAdjToSeq-
> !  ==> Fatal error occurred, no output PDF file produced!
> Transcript written on manual1.log.
> make: *** [manual.pdf] Error 1
> <<<<<<<<<<<
>
> Perhaps the following fix:
>
> >>>>>>>>
>
> diff --git a/src/mat/impls/adj/mpi/mpiadj.c b/src/mat/impls/adj/mpi/
> mpiadj.c
> index 5241a8fae9..e8425726d9 100644
> --- a/src/mat/impls/adj/mpi/mpiadj.c
> +++ b/src/mat/impls/adj/mpi/mpiadj.c
> @@ -806,7 +806,7 @@ PETSC_EXTERN PetscErrorCode MatCreate_MPIAdj(Mat B)
>  }
>
>  /*@C
> -   MatMPIAdjToSeq- Converts an parallel MPIAdj matrix to complete MPIAdj
> on each process (needed by sequential preconditioners)
> +   MatMPIAdjToSeq - Converts an parallel MPIAdj matrix to complete MPIAdj
> on each process (needed by sequential preconditioners)
>
> Logically Collective on MPI_Comm
> <<<<<<
>
> Will check if this works..
>
> Satish
>
>
> On Thu, 22 Mar 2018, Smith, Barry F. wrote:
>
> >
> >I'll talk to Satish, it is now unclear (at least to me) where the
> manual is actually built and the output stored.
> >
> >
> > > On Mar 22, 2018, at 5:17 AM, Patrick Sanan <patrick.sa...@gmail.com>
> wrote:
> > >
> > > The manual and dev manual automated builds are failing on master, but
> I can't reproduce on any of my local builds and the output is now very
> terse, so it's not obvious to me how to debug. There were some updates
> recently to the manual make process, which could be related.
> > >
> > > I'd move to re-introduce more of the tex output by default when
> building the manuals - this clutters the log but overall I'd say it would save
> time (and might be the most efficient way to debug this current issue).
> >
> >
>
>

